Our investigation of DCAA hotline allegations and our DCAA-wide follow-up audit document systemic weaknesses in DCAA’s management environment and structure for assuring audit quality. Last year, our investigation of hotline allegations substantiated auditor concerns made on all 14 audits we reviewed at two locations and 62 forward pricing reports we investigated at a third location. We found that (1) workpapers did not support reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate audit evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions. In addition, we found that contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions, and opinions of some audits—a serious independence issue.

This year, our follow-on audit found DCAA-wide audit quality problems similar to those identified in our investigation, including compromise of auditor independence, insufficient audit testing to support conclusions and opinions, and inadequate planning and supervision. For example, of the 69 audits and cost-related assignments we reviewed, 65 exhibited serious GAGAS and other deficiencies that rendered them unreliable for decisions on contract awards and contract management and oversight. Although not as serious, the remaining four audits also had GAGAS compliance problems. Of the 69 audits and cost-related assignments, 37 covered key contractor business systems and related controls, including cost accounting, estimating, and billing systems. Contracting officers rely on the results of these audits for 3 or more years to make decisions on pricing, contract awards, and payments. In addition, while DCAA did not consider 26 of the 32 cost-related assignments we reviewed to be GAGAS audits, DCAA nonetheless did not perform sufficient testing to support the reported conclusions on that work, which related to contractor billings.

DCAA has rescinded 81 audit reports in response to our work and the DOD Inspector General’s (IG) follow-up audit because the audit evidence was outdated, insufficient, or inconsistent with reported conclusions and opinions, and reliance on these reports for contracting decisions could pose a problem. About one-third of the rescinded reports relate to unsupported opinions on contractor internal controls and were used as the basis for risk assessments and planning on subsequent internal control and cost-related audits. Other rescinded reports relate to CAS compliance and contract pricing decisions. Because the conclusions and opinions in the rescinded reports were used to assess risk in planning subsequent audits, they affect the reliability of hundreds of other audits and contracting decisions covering billions of dollars in DOD expenditures.

Our hotline investigation found numerous examples where DCAA failed to comply with GAGAS. For example, contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions and opinions, and reporting in three cases we investigated—a serious independence issue. For 14 audits at two DCAA locations, we found that (1) audit documentation did not support the reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions. 
We also substantiated allegations that forward pricing audit reports at a third DCAA location were issued before supervisors completed their review of the audit documentation because of the 20- to 30-day time frames required to support contract negotiations. Throughout our investigation, auditors at each of the three locations addressed in the hotline allegations told us that the limited number of hours approved for their audits directly affected the sufficiency of audit testing. Deficient audits do not provide assurance that billions of dollars in annual payments to these contractors complied with the FAR, CAS, or contract terms. We also found that DCAA managers took actions against staff at two locations that served to intimidate auditors, prevent them from speaking with investigators, and create a generally abusive work environment. The following discussion highlights some of the examples from our investigation.

In planning an estimating system audit of a major aerospace company, DCAA made an up-front agreement with the contractor to limit the scope of work and basis for the audit opinion. The contractor was unable to develop compliant estimates, leading to a draft audit opinion of “inadequate-in-part.” The contractor objected to the draft findings, and DCAA management assigned a new supervisory auditor. DCAA management then threatened the senior auditor with personnel action if he did not delete the findings from the report and change the draft audit opinion to “adequate.”

Another audit of the above contractor related to a revised proposal that was submitted after DCAA had reported an “adverse” (inadequate) opinion on the contractor’s 2005 proposal to provide commercial satellite launch capability. At the beginning of the audit, the buying command and contractor officials met with a DCAA regional audit manager to determine how to resolve CAS compliance issues and obtain a favorable audit opinion. Although the contractor failed to provide all cost information requested for the audit, the DCAA regional audit manager (RAM) instructed the auditors that they could not base an “adverse” opinion on the lack of information to audit certain costs. The manager directed the auditors to exclude any reference to CAS noncompliance in the audit documentation and to change the audit opinion to “inadequate-in-part.” Based on the more favorable audit opinion, the buying command negotiated a $967 million contract, which has since grown to over $1.6 billion through fiscal year 2009. The Defense Criminal Investigative Service is completing a criminal investigation conducted in response to our findings.

The DOD IG performed a follow-up audit and confirmed our findings that DCAA’s audit was impaired because of a lack of independence; the audit working papers did not support the reported opinions in the May 8, 2006, proposal audit report; and the draft audit opinion was changed without sufficient documentation. In addition, the DOD IG concluded that the DCAA RAM failed to exercise objective and impartial judgment on significant issues associated with conducting the audit and reporting on the work—a significant independence impairment—and that the RAM did not protect the interests of the government as required by DCAA policy. The DOD IG also concluded that the contractor’s unabsorbed Program Management and Hardware Support (PM&HS) costs represented losses incurred on other contracts and prior accounting periods, including commercial losses—a CAS noncompliance. 
The DOD IG recommended that the Air Force buying command withhold the balance of the $271 million in unabsorbed PM&HS costs (of which $101 million had already been paid) and that the Air Force cease negotiations with the launch services contractor on a $114 million proposal for unabsorbed costs. DCAA is currently performing CAS compliance audits on the commercial satellite launch contract costs. If DCAA determines that the contractor’s costs did not comply with CAS related to unallowable costs, cost accounting period, and allocation of direct and indirect cost, and the FAR related to losses on other contracts, DCAA findings should provide the basis for recovering amounts already paid.

For a billing system audit of a contractor with $168 million in annual billings to the government, the field office manager allowed the original auditor to work on the audit after being assured that the auditors would help the contractor correct billing system deficiencies during the performance of the audit. After the original auditor identified 10 significant billing system deficiencies, the manager removed her from the audit and assigned a second auditor who then dropped 8 of the 10 significant deficiencies and reported one significant deficiency and one suggestion to improve the system. The final opinion was “inadequate-in-part.” However, the DCAA field office retained the contractor’s direct billing privileges—a status conveyed to a contractor based on the strength of its billing system controls whereby invoices are submitted directly to the government paying office without prior review. After we brought this to the attention of DCAA western region officials, the field office rescinded the contractor’s direct billing status.

Our follow-up audit found that a management environment and agency culture that focused on facilitating the award of contracts and an ineffective audit quality assurance structure are at the root of the DCAA-wide audit failures that we identified for the 69 audits and cost-related assignments that we reviewed. DCAA’s focus on a production-oriented mission led DCAA management to establish policies, procedures, and training that emphasized performing a large quantity of audits to support contracting decisions and gave inadequate attention to performing quality audits. An ineffective quality assurance structure, whereby DCAA gave passing scores to deficient audits, compounded this problem. Although the reports for all 37 audits of contractor internal controls that we reviewed stated that the audits were performed in accordance with GAGAS, we found GAGAS compliance issues with all of these audits. The issues or themes are consistent with those identified in our prior investigation.

Lack of independence. In seven audits, independence was compromised because auditors provided material nonaudit services to a contractor they later audited; experienced access to records problems that were not fully resolved; and significantly delayed report issuance, which allowed the contractors to resolve cited deficiencies so that they were not included in the audit reports. GAGAS state that auditors should be free from influences that restrict access to records or that improperly modify audit scope.

Insufficient testing. Thirty-three of 37 internal control audits did not include sufficient testing of internal controls to support auditor conclusions and opinions. 
GAGAS for examination-level attestation engagements require that sufficient evidence be obtained to provide a reasonable basis for the conclusion that is expressed in the report. For internal control audits, which are relied on for 2 to 4 years and sometimes longer, the auditors would be expected to test a representative selection of transactions across the year and not transactions for just 1 day, 1 month, or a couple of months. However, we found that for many controls, the procedures performed consisted of documenting the auditors’ understanding of controls, and the auditors did not test the effectiveness of the implementation and operation of controls at all.

Unsupported opinions. The lack of sufficient support for the audit opinions on 33 of the 37 internal control audits we reviewed rendered them unreliable for decision making on contract awards, direct-billing privileges, the reliability of cost estimates, and reported direct cost and indirect cost rates. Similarly, the 32 cost-related assignments we reviewed did not contain sufficient testing to provide reasonable assurance that overpayments and billing errors that might have occurred were identified. As a result, there is limited assurance that any such errors, if they occurred, were corrected and that related improper contract payments, if any, were refunded or credited to the government. Contractors are responsible for ensuring that their billings reflect fair and reasonable prices and contain only allowable costs, and taxpayers expect DCAA to review these billings to provide reasonable assurance that the government is not paying more than it should for goods and services. Based on our findings that sufficient voucher testing was not performed to support decisions to approve contractors for direct-billing privileges, DCAA recently removed over 200 contractors from the direct-bill program.

Production environment and audit quality issues. DCAA’s mission statement, strategic plan, and metrics all focused on producing a large number of audit reports and provided little focus on assuring quality audits that protect taxpayer interest. For example, DCAA’s current approach of performing 30,000 or more audits annually and issuing over 22,000 audit reports with 3,600 auditors substantially contributed to the widespread audit quality problems we identified. Within this environment, DCAA’s audit quality assurance program was not properly implemented, resulting in an ineffective quality control process that accepted audits with significant deficiencies and noncompliance with GAGAS and DCAA policy. Moreover, even when DCAA’s quality assurance documentation showed evidence of serious deficiencies within individual offices, those offices were given satisfactory ratings. Considering the large number of DCAA audit reports issued annually and the reliance the contracting and finance communities have placed on DCAA audit conclusions and opinions, an effective quality assurance program is key to protecting the public interest. Such a program would report review findings along with recommendations for any needed corrective actions; provide training and additional policy guidance, as appropriate; and perform follow-up reviews to assure that corrective actions are taken. 
GAGAS require that each audit organization performing audits and attestation engagements in accordance with GAGAS have a system of quality control that is designed to provide the organization with reasonable assurance that it and its personnel comply with professional standards and applicable legal and regulatory requirements, and that the organization undergo an external peer review at least once every 3 years. On September 1, 2009, the DCAA Director advised us that DCAA needs up to 2 years to revise its current audit approach and establish an adequate audit quality control system before undergoing another peer review.

For fiscal year 2008, DOD reported that it obligated over $380 billion for payments to federal contractors, more than double the amount it obligated for fiscal year 2002. With hundreds of billions in taxpayer dollars at stake, the government needs strong controls to provide reasonable assurance that these contract funds are not being lost to fraud, waste, abuse, and mismanagement. Moreover, effective contract audit capacity is particularly important as DOD continues its use of high-risk contracting strategies. For example, we have found numerous issues with DOD’s use of time-and-materials contracts, which are used to purchase billions of dollars of services across the government. Under these types of contracts, payments to contractors are based on the number of labor hours billed at a fixed hourly rate—which includes wages, overhead, and profit—and the cost of any materials. These contracts are considered high risk for the government because the contractor’s profit is tied to the number of hours worked. Because the government bears the responsibility for managing contract costs, it is essential that the government be assured, using DCAA as needed, that the contractor has a good system in place to keep an accurate accounting of the number of hours billed and materials acquired and used.

In addition, we have said that DOD needs to improve its management and oversight of undefinitized contract actions, under which DOD can authorize contractors to begin work and incur costs before reaching a final agreement on contract terms and conditions, including price. These contracts are high risk because the contractor has little incentive to control costs while the contract remains undefinitized. In one case, we found that the lack of timely negotiations on a task order issued to restore Iraq’s oil infrastructure increased the government’s risk when DOD paid the contractor nearly all of the $221 million in costs questioned by DCAA. More timely negotiations, including involvement by DCAA, could have reduced the risk to the government of possible overpayment.

DCAA initiated a number of actions to address findings in our July 2008 report as well as findings from DOD follow-up efforts, including the DOD Comptroller/Chief Financial Officer (CFO) August 2008 “tiger team” review and the Defense Business Board study, which was officially released in January 2009. Examples of recent DCAA and DOD actions include the following.
• Eliminating production metrics and implementing new metrics intended to focus on achieving quality audits.
• Establishing an anonymous Web site to address management and hotline issues. In addition, DCAA’s Assistant Director for Operations has been proactive in handling internal DCAA Web site hotline complaints.
• Revising policy guidance to address auditor independence, assure management involvement in key decisions, and address audit quality issues. 
DCAA also took action to halt auditor participation in nonaudit services that posed independence concerns. DCAA also has enlisted assistance from other agencies to develop a human capital strategic plan, assist in cultural transformation, and conduct a staffing study. Further, in March 2009, the new DOD Comptroller/CFO established a DCAA Oversight Committee to monitor and advise on DCAA corrective actions.

While these are positive steps, much more needs to be done to address fundamental weaknesses in DCAA’s mission, strategic plan, metrics, audit approach, and human capital practices that have resulted in widespread audit quality problems. DCAA’s production-oriented culture is deeply embedded and will likely take several years to change. DCAA’s mission focused primarily on producing reports to support procurement and contracting community decisions, with no mention of quality audits that serve taxpayer interest. Further, DCAA’s culture has focused on hiring at the entry level and promoting from within the agency, and most training has been conducted by agency staff. These practices have led to an insular culture with limited perspectives on how to make effective organizational changes.

To address these issues, our September 2009 report contained 15 recommendations to improve the quality of DCAA’s audits and strengthen auditor effectiveness and independence. Key GAO recommendations relate to the need for DCAA to develop a risk-based audit approach and develop a staffing plan in order to match audit priorities to available resources. To develop an effective risk-based audit approach, DCAA will need to work with key DOD stakeholders to determine the appropriate mix of audit and nonaudit services it should perform and determine what, if any, of these responsibilities should be transferred or reassigned to another DOD agency or terminated in order for DCAA to comply with GAGAS requirements. We also made recommendations for DCAA to establish in-house expertise or obtain outside expertise on auditing standards to (1) assist in revising contract audit policy, (2) provide guidance on sampling and testing, and (3) develop training on professional auditing standards. In addition, we recommended that DOD conduct an independent review of DCAA’s revised audit quality assurance program and follow-up to assure that appropriate corrective actions are taken.

Mr. Chairman and Members of the Panel, this concludes my statement. We would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Major contributors to our testimony include William T. Woods, Director, Acquisition and Sourcing Management; F. Abe Dymond, Assistant General Counsel; Gayle L. Fischer, Assistant Director, Financial Management and Assurance; Richard Cambosos; Jeremiah Cockrum; Shawnda Lindsey; Andrew McIntosh; Lerone Reid; and Angela Thomas.

DOD’s High-Risk Areas: Actions Needed to Reduce Vulnerabilities and Improve Business Outcome, GAO-09-460T, Washington, D.C.: March 12, 2009.
High-Risk Series: An Update, GAO-09-271, Washington, D.C.: January 2009.
DCAA Audits: Widespread Problems with Audit Quality Require Significant Reform, GAO-09-468, Washington, D.C.: September 23, 2009.
DCAA Audits: Widespread Problems with Audit Quality Require Significant Reform, GAO-09-1009T, Washington, D.C.: September 23, 2009.
DCAA Audits: Allegations That Certain Audits at Three Locations Did Not Meet Professional Standards Were Substantiated, GAO-08-993T, Washington, D.C.: September 10, 2008.
DCAA Audits: Allegations That Certain Audits at Three Locations Did Not Meet Professional Standards Were Substantiated, GAO-08-857, Washington, D.C.: July 22, 2008.
Contract Management: Minimal Compliance with New Safeguards for Time-and-Materials Contracts for Commercial Services and Safeguards Have Not Been Applied to GSA Schedules Program, GAO-09-579, Washington, D.C.: June 24, 2009.
Defense Acquisitions: Charting a Course for Lasting Reform, GAO-09-663T, Washington, D.C.: April 30, 2009.
Defense Management: Actions Needed to Overcome Long-standing Challenges with Weapon Systems Acquisition and Service Contract Management, GAO-09-362T, Washington, D.C.: February 11, 2009.
Defense Acquisitions: Perspectives on Potential Changes to Department of Defense Acquisition Management Framework, GAO-09-295R, Washington, D.C.: February 27, 2009.
Space Acquisitions: Uncertainties in the Evolved Expendable Launch Vehicle Program Pose Management and Oversight Challenges, GAO-08-1039, Washington, D.C.: September 26, 2008.
Defense Contracting: Post-Government Employment of Former DOD Officials Needs Greater Transparency, GAO-08-485, Washington, D.C.: May 21, 2008.
Defense Contracting: Army Case Study Delineates Concerns with Use of Contractors as Contract Specialists, GAO-08-360, Washington, D.C.: March 26, 2008.
Defense Contracting: Additional Personal Conflict of Interest Safeguards Needed for Certain DOD Contractor Employees, GAO-08-169, Washington, D.C.: March 7, 2008.
Defense Contract Management: DOD’s Lack of Adherence to Key Contracting Principles on Iraq Oil Contract Put Government Interests at Risk, GAO-07-839, Washington, D.C.: July 31, 2007.
Defense Contracting: Improved Insight and Controls Needed over DOD’s Time-and-Materials Contracts, GAO-07-273, Washington, D.C.: June 29, 2007.
Defense Contracting: Use of Undefinitized Contract Actions Understated and Definitization Time Frames Often Not Met, GAO-07-559, Washington, D.C.: June 19, 2007.
Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD’s Acquisition of Services, GAO-07-832T, Washington, D.C.: May 10, 2007.
Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes, GAO-07-20, Washington, D.C.: November 9, 2006.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2008, the Department of Defense (DOD) obligated over $380 billion to federal contractors, more than doubling the amount it obligated in fiscal year 2002. With hundreds of billions of taxpayer dollars at stake, the government needs strong controls to provide reasonable assurance that contract funds are not being lost to fraud, waste, abuse, and mismanagement. The Defense Contract Audit Agency (DCAA) is charged with a critical role in contractor oversight by providing auditing, accounting, and financial advisory services in connection with DOD and other federal agency contracts and subcontracts. However, last year GAO found numerous problems with DCAA audit quality at three locations in California, including the failure to meet professional auditing standards. In a follow-up audit issued this September, GAO found that these problems existed agencywide.

Today's testimony describes widespread audit quality problems at DCAA and provides information about continuing contract management challenges at DOD, which underscore the importance of DCAA audits that meet professional standards. It also discusses some of the corrective actions taken by DCAA and DOD and key GAO recommendations to improve DCAA audit quality. In preparing this testimony, GAO drew from issued reports and testimonies. These products contained statements regarding the scope and methodology GAO used.

GAO found substantial evidence of widespread audit quality problems at DCAA. In the face of this evidence, DOD, Congress, and American taxpayers lack reasonable assurance that billions of dollars in federal contract payments are being appropriately scrutinized for fraud, waste, abuse, and mismanagement. An initial investigation of hotline allegations at three DCAA field office locations in California revealed that none of the 14 audits and 62 forward pricing reports GAO examined were performed in accordance with professional auditing standards. For example, while auditing the satellite launch proposal for a major U.S. defense contractor, a DCAA manager experienced pressure from the contractor and the DOD buying command to drop adverse findings. The manager directed his auditors to drop the findings, and DCAA issued a more favorable opinion, allowing the contractor to win a contract that improperly compensated it for hundreds of millions of dollars in commercial business losses. Specifically, of $271 million in unallowable costs related to commercial losses, the contractor has already been paid $101 million. This incident is under criminal investigation by the DOD Inspector General (IG).

In September of this year, GAO followed up on its initial investigation and identified audit quality problems agencywide at DCAA. Audit quality problems included insufficient audit testing, inadequate planning and supervision, and the compromise of auditor independence. For example, of the 69 audits and cost-related assignments GAO reviewed, 65 exhibited serious deficiencies that rendered them unreliable for decisions on contract awards, management, and oversight. DCAA has rescinded 81 audit reports to date as a result of GAO's and DOD IG's work. Because the rescinded reports were used to assess risk in planning subsequent audits, they affect the reliability of hundreds of other audits and contracting decisions covering billions of dollars in DOD contract expenditures. 
GAO determined that quality problems are widespread because DCAA's management environment and quality assurance structure were based on a production-oriented mission that prevented DCAA from protecting the public interest while also facilitating DOD contracting. GAO has designated both contract management and weapon systems acquisition as high-risk areas since the early 1990s. DOD acquisition and contract management weaknesses create vulnerabilities to fraud, waste, abuse, and mismanagement that leave hundreds of billions of taxpayer dollars at risk, and underscore the importance of a strong contract audit function. In response to GAO's findings and recommendations, DCAA has taken several steps to improve metrics, policies, and processes, and the DOD Comptroller has established a DCAA oversight committee. To ensure quality audits for contractor oversight and accountability, DOD and DCAA will also need to address the fundamental weaknesses in DCAA's mission, strategic plan, metrics, audit approach, and human capital practices that have had a detrimental effect on audit quality.
We substantiated the allegations and auditor concerns made on each of the 13 cases we investigated, involving 14 audits at two locations and forward pricing audit issues at a third location. The 13 cases related to seven contractors. In the 12 cases at locations 1 and 2, we substantiated the allegations and auditor concerns that (1) workpapers did not support reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate audit evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions. We also found that contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions, and opinions of some audits—a serious independence issue. We also substantiated allegations of problems with the audit environment and inadequate supervision of certain forward pricing audits at location 3. Moreover, during our investigation, DCAA managers took actions against their staff at two locations that served to intimidate auditors and create an abusive work environment. DCAA states that its audits are performed according to professional standards (GAGAS). However, in substantiating the allegations, we found numerous failures to comply with these standards in all 13 cases we investigated. The working papers did not adequately support the final conclusion and opinion for any of the 14 audits we investigated. In many cases, supervisors changed audit opinions to indicate contractor controls or compliance with CAS was adequate when workpaper evidence indicated that significant deficiencies existed. We also found that in some cases, DCAA auditors did not perform sufficient work to support draft audit conclusions and their supervisors did not instruct or allow them to perform additional work before issuing final reports that concluded contractor controls or compliance with CAS were adequate. At location 1, we also found undue contractor influence that impaired auditor independence. At location 2, two supervisors were responsible for the 12 audits we investigated, and 11 of these audits involved insufficient work to support the reported opinions. At location 3, we substantiated allegations about inadequate supervision of trainees, reports being issued without final supervisory review, and contracting officer pressure to issue reports before audit work was completed in order to meet contract negotiation time frames—a serious independence issue. Noncompliance with GAGAS in the cases we investigated has had an unknown financial effect on the government. Because DCAA auditors’ limited work identified potential significant deficiencies in contractor systems and accounting practices that were not analyzed in sufficient detail to support reportable findings and recommendations for corrective action, reliance on data and information generated by the audited systems could put users and decision makers at risk. Tables summarizing our findings for all the audits can be found in appendixes I and II. The following examples illustrate problems we found at two DCAA locations: In conducting a 2002 audit related to a contractor estimating system, DCAA auditors reviewed draft basis of estimates (BOE) prepared by the contractor and advised the contractor on how to correct significant deficiencies. BOEs are the means for providing government contract officials with information critical to making contract pricing decisions. 
This process resulted from an up-front agreement between the DCAA resident auditor and the contractor—one of the top five government contractors based on contract dollar value—that limited the scope of work and established the basis for the audit opinion. According to the agreement, the contractor knew which BOEs would be selected for audit and the audit opinion would be based on the final, corrected BOEs after several DCAA reviews. Even with this BOE review effort, the auditors found that the contractor still could not produce compliant BOEs and labeled the estimating system “inadequate in part.” We found that enough evidence had been collected by the original supervisory auditor and senior auditor to support this opinion. However, after the contractor objected to draft findings and conclusions presented at the audit exit conference, the DCAA resident auditor replaced the original supervisory auditor assigned to this audit and threatened the senior auditor with personnel action if he did not change the summary workpaper and draft audit opinion. The second supervisory auditor issued the final report with an “adequate” opinion without documenting adequate support for the changes. This audit did not meet GAGAS for auditor objectivity and independence because of the up-front agreement, and it did not meet standards related to adequate support for audit opinions. The draft report for a 2005 billing system audit identified six significant deficiencies, one of which allowed the contractor to overbill the government by $246,000 and another that may have led to $3.5 million in overbillings. DCAA managers replaced the supervisory auditor and auditor, and the new staff worked together to modify working papers and change the draft audit opinion from “inadequate,” to “inadequate in part,” and, finally, to “adequate.” Sufficient testing was not documented to support this opinion. The DOD IG concluded that DCAA should rescind the final report for this audit, but DCAA did not do so. Billing system audits are conducted to assess contractor controls for assuring that charges to the government are appropriate and compliant and to support decisions on whether to approve contractors for direct billing. As a result of the 2005 audit, DCAA authorized this contractor for direct billing of its invoices without prior government review, thereby providing quicker payments and improved cash flow to the contractor. On June 20, 2008, when we briefed DOD on the results of our investigation, DCAA advised us that a DCAA Western Region review of this audit in 2008 concluded that the $3.5 million finding was based on a flawed audit procedure. As a result, it rescinded the audit report on May 22, 2008. However, DCAA officials said that they did not remove the contractor’s direct-billing privileges because other audits did not identify billing problems. The draft report for a 2005 CAS 403 compliance audit requested by a Department of Energy administrative contracting officer (ACO) identified four deficiencies related to corporate cost allocations to government business segments. However, a DCAA supervisory auditor directed a member of her staff to write a “clean opinion” report in 1 day using “boilerplate” language and without reviewing the existing set of working papers developed by the original auditor. The supervisory auditor appropriately dropped two significant deficiencies from the draft report, but did not adequately document the changes in the workpapers. 
In addition, the supervisory auditor improperly referred two other significant deficiencies to another DCAA office that does not have audit jurisdiction and, therefore, did not audit the contractor’s corporate costs or CAS 403 compliance. The final opinion was later contradicted by a September 21, 2007, DCAA report that determined that this contractor was in fact not in compliance with CAS 403 during the period of this audit.

We also substantiated allegations that there were problems with the audit environment at a third DCAA location—a resident office responsible for auditing another of the five largest government contractors. For example, the two supervisors, who approved and signed 62 of the 113 audit reports performed at the resident office location during fiscal years 2004 through 2006, said that trainees were assigned to complex forward pricing audits as their first assignments even though they had no institutional knowledge about the type of materials at risk of overcharges, how to look at related sources of information for cost comparisons, or how to complete the analysis of complex cost data required by FAR. The supervisors, who did not always have the benefit of experienced auditors to assist them in supervising the trainees, admitted that they generally did not review workpapers in final form until after reports were issued. Moreover, because the trainee auditors did not have an adequate understanding of DCAA’s electronic workpaper filing system, they did not always enter completed workpapers in the system, resulting in a loss of control over official workpapers.

In addition, one of the two supervisory auditors told us that contracting officers would sometimes tell auditors to issue proposal audit reports in as few as 20 days with whatever information the auditor had at that time and not to cite a scope limitation in the audit reports, so that they could begin contract negotiations. If the available information was insufficient, GAGAS would have required the auditors to report a scope limitation. Where scope limitations existed but were not reported, the contracting officers could have negotiated contracts with insufficient information. Moreover, a 2006 DCAA Western Region quality review reported 28 systemic deficiencies on 9 of 11 forward pricing audits reviewed, including a lack of supervisory review of the audits. The problems at this location call into question the reliability of the 62 forward pricing audit reports issued by the two supervisors responsible for forward pricing audits at the resident office location from fiscal years 2004 through 2006, connected with over $6.4 billion in government contract negotiations.

Throughout our investigation, auditors at each of the three DCAA locations told us that the limited number of hours approved for their audits directly affected the sufficiency of audit testing. At the third DCAA location we investigated, two former supervisory auditors told us that the volume of requests for the audits, short time frames demanded by customers for issuing reports to support contract negotiations (e.g., 20 to 30 days), and limited audit resources affected their ability to comply with GAGAS. Our review of DCAA performance data showed that DCAA measures audit efficiency and productivity as the ratio of contract dollars audited to audit hours. 
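Stated as an equation (the ratio itself is the measure described above; the figures in the example that follows are hypothetical and included only for illustration):

\[
\text{productivity} = \frac{\text{contract dollars audited}}{\text{audit hours}}
\]

Under a measure of this form, a field office that covers $500 million in contract dollars with 10,000 audit hours scores twice as high as an office that spends 20,000 hours performing more thorough testing of the same $500 million, so the metric rewards the volume of dollars covered rather than the depth of audit work performed.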
In addition, because customer-requested assignments—such as forward pricing audits requested by contracting officers—which are referred to as demand work by DCAA, take priority, other work, such as internal control and CAS compliance audits, is often performed late in the year. Auditors told us that there is significant management pressure to complete these nondemand audits by the end of the fiscal year to meet field audit office (FAO) performance plans.

During the DOD IG and GAO investigations, we identified a pattern of frequent management actions that served to intimidate the auditors and create an abusive environment at two of the three locations covered in our investigation. In this environment, some auditors were hesitant to speak to us even on a confidential basis. For example, supervisory auditors and the branch manager at one DCAA location we visited pressured auditors, including trainees who were in probationary status, to disclose to them what they told our investigators. Some probationary trainees told us this questioning made them feel pressured or uncomfortable. Further, we learned of verbal admonishments, reassignments, and threats of disciplinary action against auditors who raised questions about management guidance to omit their audit findings and change draft opinions or who spoke with or contacted our investigators, DOD investigators, or DOD contracting officials.

We briefed cognizant DCAA region and headquarters officials on the results of our investigation in February 2008 and reviewed additional documentation they provided. We briefed DOD and DCAA officials on the results of our investigation on June 20 and 25, 2008. We summarized DCAA’s comments on our corrective action briefing in our investigative report, and we included relevant details of DCAA’s comments at the end of our case discussions. In response to our investigation, DCAA rescinded two audit reports and removed a contractor’s direct billing authorization related to a third audit. DCAA also performed subsequent audits related to three additional cases that resulted in audit opinions that contradicted previously reported adequate (“clean”) opinions and included numerous significant deficiencies. For other cases, DCAA officials told us that although workpaper documentation could have been better, on the basis of other audits DCAA performed, they do not believe the reported opinions were incorrect or misleading.

In the cases we investigated, pressure from the contracting community and buying commands for favorable opinions to support contract negotiations impaired the independence of three audits involving two of the five largest government contractors. In addition, DCAA management pressure to (1) complete audit work on time in order to meet performance metrics and (2) report favorable opinions so that work could be reduced on future audits and contractors could be approved for direct billing privileges led the three DCAA FAOs to take inappropriate shortcuts—ultimately resulting in noncompliance with GAGAS and internal DCAA Contract Audit Manual (CAM) guidance. Although it is important for DCAA to issue products in a timely manner, the only way for auditors to determine whether “prices paid by the government for needed goods and services are fair and reasonable” is by performing sufficient audit work to determine the adequacy of contractor systems and related controls, and their compliance with laws, regulations, cost accounting standards, and contract terms. 
Further, it is important that managers and supervisory auditors at the three locations we investigated work with their audit staff to foster a productive, professional relationship and assure that auditors have the appropriate training, knowledge, and experience. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact me at 202- 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Major contributors to this testimony include Gayle L. Fischer, Assistant Director; Andrew O’Connell, Assistant Director and Supervisory Special Agent; F. Abe Dymond, Assistant General Counsel; Richard T. Cambosos; Jeremiah F. Cockrum; Andrew J. McIntosh; and Ramon J. Rodriguez, Senior Special Agent. The DCAA resident office and contractor made an up-front agreement on audit scope, which had the effect of predetermining an “adequate” audit opinion. On the basis of pressure from contractor and buying command to resolve CAS compliance issues and issue a favorable opinion, a DCAA region official directed the auditors not to include CAS compliance problems in the audit workpapers. Branch manager and supervisory auditor terminated audit work and issued opinions without sufficient documentation based on their view that defective pricing did not exist on the related contracts. Supervisory auditor dropped preliminary findings based on a flawed audit procedure instead of requiring auditors to perform sufficient testing to conclude on the adequacy of billing system controls. Auditor was excluded from the exit conference, findings were dropped without adequate support, and supervisor made contradictory statements on her review of the audit. Dropped findings on corporate accounting were referred to another field audit office (FAO), which does not review corporate costs. Supervisor prepared and approved key working papers herself, without required supervisory review. Supervisor directed another auditor to write a clean opinion report without reviewing the working papers. Supervisor then changed the working papers without support and referred two dropped findings to another FAO, which does not review corporate overhead allocations. Inexperienced trainees assigned to complex forward pricing audits without proper supervision. Reports issued with unqualified opinions before supervisory review was completed due to pressure from contracting officers. Significant deficiency and FAR noncompliance related to the lack of contractor job descriptions for executives not reported. Significant deficiency related to subcontract management not reported. Second auditor and supervisor dropped 6 of 10 significant deficiencies without adequate documentation to show that identified weaknesses were resolved. Supervisor identified problems with test methodology but dropped findings instead of requiring tests to be reperformed. Second auditor and supervisor deleted most audit steps and performed limited follow-up work that did not support the reported opinion of overall compliance with CAS. Purpose of audit was to review the corrective action plan (CAP) developed by Contractor A in response to prior findings of inadequate basis of estimates (BOE) related to labor hours. 
In the face of pressure from DOD’s contracting community to approve Contractor A’s estimating system, we found evidence there was an up-front agreement between DCAA and Contractor A to limit the scope of work and basis of the audit opinion (a significant impairment of auditor independence). Auditors found significant deficiencies with the CAP implementation plan, that is, the contractor could not develop compliant BOEs without DCAA’s assistance at the initial, intermediate, and final stages of the estimates. Original supervisory auditor was reassigned; the resident auditor and new supervisory auditor directed the draft opinion be changed from “inadequate in part” to “adequate” after the contractor objected to DCAA draft findings and opinion. The working papers did not contain audit evidence to support the change in opinion. Field office management threatened the senior auditor with personnel action if he did not change the draft audit opinion to “adequate.” Audit related to a revised proposal submitted after DCAA reported an adverse (inadequate) opinion on Contractor A’s 2005 proposal. At beginning of the audit, buying command and Contractor A officials met with a DCAA regional audit manager to determine how to resolve cost accounting standard (CAS) compliance issues and obtain a favorable audit opinion. Contractor A did not provide all cost information requested for audit. Contrary to DCAA Contract Audit Manual guidance, the regional audit manager instructed auditors that they could not base an “adverse” (inadequate) audit opinion on the lack of information to audit certain costs. On the basis of an “inadequate in part” opinion reported in May 2006, the buying command negotiated a $937 million contract, which has grown to $1.2 billion. Branch manager and supervisory auditor predetermined that there was no defective pricing; however, the auditor concluded that Contractor B’s practice potentially constituted defective pricing and obtained technical guidance that specific contracts would need to be analyzed to make a determination. The branch manager disagreed. Supervisory auditor and branch manager subsequently issued three reports stating that Contractor B’s practice at three divisions did not constitute defective pricing. Insufficient work was performed on these audits to come to any conclusion about defective pricing and as a result, the final opinions on all three audit reports are not supported. Absent DCAA audit support for defective pricing, the contracting officer pursued a CAS 405 noncompliance at 3 contractor divisions and recovered $71,000. On July 17, 2008, Contractor B settled on a Defense Criminal Investigative Service defective pricing case for $620,900. Draft audit report identified six significant deficiencies, one of which led Contractor C to overbill the government by $246,000 and another which potentially led to $3.5 million in overbillings, but audit work was incomplete. The contractor had refunded the $246,000. The original auditor reported that the $3.5 million was for subcontractor costs improperly billed to the government. The supervisor deleted the finding based on a flawed audit procedure, but did not require additional testing. First supervisory auditor and auditor were replaced after draft audit was completed. 
New auditor and supervisory auditor worked together to modify working papers and alter draft audit opinion from “inadequate,” to “inadequate in part,” and, finally, to “adequate.” Sufficient testing was not performed to determine if the contractor had systemic weaknesses or to support an opinion that contractor billing system controls were adequate. On the basis of the “adequate” opinion, the field audit office (FAO) approved the contractor for direct billing. DOD IG recommended that DCAA rescind the final report for this audit, but DCAA did not do so. Following the briefing on our investigation, the DCAA Western Region rescinded the audit report on May 22, 2008. Auditor identified five deficiencies and concluded the contractor’s system was “inadequate in part.” Auditor did not perform sufficient work to support some findings, but supervisory auditor did not direct the auditor to gather additional evidence. After consulting with the branch manager, the supervisory auditor modified documents and eliminated significant deficiencies, changing the draft audit opinion from “inadequate in part” to “adequate.” Working papers did not properly document the reason for the change in opinion and therefore do not support the final opinion. DOD IG recommended that DCAA rescind the final report for this audit, but DCAA did not do so. On June 27, 2008, the DCAA Western Region informed us that it was rescinding this audit report. Auditor believed audit evidence related to a 24 percent error rate in a small sample of cost pools supported an “inadequate in part” opinion and suggested testing be expanded, but supervisory auditor disagreed. Auditor and supervisory auditor documented their disagreement in the working papers. Supervisory auditor subsequently modified documents to change the draft audit opinion from “inadequate in part” to “adequate” before issuing the final report. Certain final working papers were prepared and approved by the supervisory auditor, without proper supervision. Branch manager and supervisory auditor determined that findings of corporate accounting problems should be referred to another FAO for future audit. However, the other FAO does not audit corporate costs. Working papers do not support the final opinion. Auditor identified four potential instances of noncompliance with CAS 403. Auditor was transferred to a different team before supervisory review of her working papers. Three months later, the supervisory auditor requested that another auditor write a “clean (“adequate”) opinion” report. Second auditor used “boilerplate” (i.e., standardized) language to write the final report and never reviewed the working papers. The supervisor correctly deleted two findings and referred two findings of corporate-level non-compliances to another FAO for future audit. The other FAO does not audit corporate-level costs. Working papers do not support the final “clean opinion,” which was later contradicted by a September 21, 2007, DCAA report that determined Contractor D was in fact not in compliance with CAS 403 during the period of this audit. Two location 3 supervisors issued 62 forward pricing audits related to Contractor E between 2004 and 2006. Supervisors responsible for the 62 forward pricing audits admitted to us that they did not have time to review working papers before report issuance. According to the DCAA region, inexperienced trainee auditors were assigned to 18 of the 62 audits without proper supervision. 
However, the region did not provide assignment documentation for the 62 audits. An internal DCAA Region audit quality review found audits where the audit working papers did not support the final audit report, working paper files were lost, and working paper files were not archived in the DCAA-required time period. The 62 forward pricing audits were connected with over $6.4 billion in government contract negotiations. Three different auditors worked on this audit. Original auditor did not follow DCAA guidance when developing the audit plan and was reassigned after audit work began. Second auditor lacked experience with compensation system audits and noted in her working papers that she was “floundering” and could not finish the audit by the September 30, 2005, deadline. Third auditor was assigned 10 calendar days before the audit was due to be completed. Although audit was issued with an “adequate” opinion, insufficient work was performed on this audit and, therefore, working papers do not support the final opinion. Significant system deficiencies noted in the working papers were not reported. The DOD Office of Inspector General recommended that DCAA rescind the final report for this audit, but DCAA did not do so. Instead, DCAA initiated another audit during 2007. DCAA agreed with our finding that this audit did not include sufficient testing of executive compensation. In June 2008, the branch office issued a new audit report on Contractor D’s compensation system which identified seven significant deficiencies and an “inadequate in part” opinion. DCAA stated that it is currently assessing the impact of these deficiencies on current incurred cost audits. Auditor found that the contractor was not fulfilling its FAR-related obligations to ensure that subcontractors’ cost claims were audited. This issue was not reported as a significant deficiency in the contractor’s purchasing system. The opinion on the system was “adequate.” The working papers did not include sufficient evidence to support the final opinion. DCAA relied on a 2004 Defense Contract Management Agency (DCMA) review in which the conclusions were based word-for-word on the contractor’s response to a questionnaire without independent testing of controls. DCAA stated that the overall opinion was not based on DCMA’s review. However, DCAA stated that it will address the issue of the contractor’s procedures for ensuring subcontract audits are performed during the next purchasing system audit, which is expected to be completed by December 30, 2008. The branch manager allowed the original auditor to work on this audit after being assured that the auditors would help the contractor correct any billing system deficiencies during the performance of the audit. After the original auditor identified 10 significant billing system deficiencies, the branch manager removed her from the audit and assigned a second auditor to the audit. With approval by the FAO and region management, the second auditor dropped 8 of the 10 significant deficiencies and reported 1 significant deficiency and 1 suggestion to improve the system. The final opinion was “inadequate in part.” Six of the findings were dropped without adequate support, including a finding that certain contract terms were violated and a finding that the contractor did not audit subcontract costs. Despite issuing an “inadequate in part” opinion, the FAO decided to retain the contractor’s direct-billing privileges. 
After we brought this to the attention of region officials, the FAO rescinded the contractor’s direct billing status in March 2008. DCAA did not agree with our finding that the working papers did not contain adequate support for dropping six draft findings of significant deficiencies.

Auditor performed sampling to determine whether sufficient controls over employee timecards existed. Although the work was based on a limited judgmental sample, the auditor found 3 errors out of 18 employee timecards tested and concluded that controls over timecards were inadequate. Supervisory auditor initially agreed with the findings, but later modified working papers to change the draft audit conclusion from “certain labor practices require corrective actions” to “no significant deficiencies.” Working papers did not properly document the reason for the change in conclusion and, therefore, do not support the final audit conclusion. Supervisory auditor later stated that the initial sampling plan was flawed, but eliminated the deficiency finding rather than asking the auditor to redo the work. On April 9, 2008, DCAA issued a new audit report which identified 8 significant deficiencies and concluded that corrective actions were needed on the contractor’s labor accounting system.

After original auditor was transferred to another audit, a second auditor significantly limited the scope of the audit with supervisory approval, deleting most of the standard audit steps. Second auditor performed very limited testing and relied on contractor assertions with little or no independent verification. Supervisory auditor approved issuance of the final audit with an opinion that the contractor complied with CAS 418 in all material respects. Insufficient work was performed on this audit and, therefore, the scope of work and the working paper documentation do not support the opinion. Region officials acknowledged that work was insufficient and stated that another CAS 418 audit was initiated; however, DCAA did not rescind the misleading report. On June 25, 2008, DCAA officials told us that the new CAS 418 audit was completed with an “adequate” opinion.

Location 2 is a DCAA branch office.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Defense Contract Audit Agency (DCAA) under the Department of Defense (DOD) Comptroller plays a critical role in contractor oversight by providing auditing, accounting, and financial advisory services in connection with DOD and other federal agency contracts and subcontracts. DCAA has elected to follow generally accepted government auditing standards (GAGAS). These standards provide guidelines to help government auditors maintain competence, integrity, objectivity, and independence in their work. GAO investigated hotline complaints it received related to alleged failures to comply with GAGAS on 14 DCAA audits. Specifically, it was alleged that (1) working papers did not support reported opinions, (2) supervisors dropped findings and changed audit opinions without adequate evidence, and (3) sufficient work was not performed to support audit conclusions and opinions. GAO also investigated issues related to the quality of certain forward pricing audit reports. GAO investigators interviewed over 50 individuals, reviewed working papers and related documents for 14 audits issued from 2003 through 2007 by two DCAA offices, and reviewed documentation on audit issues at a third DCAA office. GAO did not reperform the audits to validate the completeness and accuracy of DCAA's findings. DCAA did not agree with the "totality" of GAO's findings, but it did acknowledge shortcomings with some audits and agreed to take certain corrective actions. GAO substantiated the allegations. Although DCAA policy states that its audits are performed according to GAGAS, GAO found numerous examples where DCAA failed to comply with GAGAS in all of these cases. For example, contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions, and opinions on three cases--a serious independence issue. At two DCAA locations, GAO found evidence that (1) working papers did not support reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions. GAO also substantiated allegations of inadequate supervision of certain audits at a third DCAA location. The table below contains selected details about three cases GAO investigated. Throughout GAO's investigation, auditors at each of the three DCAA locations told GAO that the limited number of hours approved for their audits directly affected the sufficiency of audit testing. Moreover, GAO's investigation identified a pattern of frequent management actions at two locations that served to intimidate auditors, discourage them from speaking with investigators, and create a generally abusive work environment.
On November 25, 2002, the President signed into law the Homeland Security Act, which created the new federal Department of Homeland Security, and the Maritime Transportation Security Act, which created a consistent security program specifically for the nation’s seaports. Since that time, and in keeping with the provisions of these new laws, the federal government has been developing a variety of new national policies and procedures for improving the nation’s response to domestic emergencies. These policies and procedures are designed to work together to provide a cohesive framework for preparing for, responding to, and recovering from domestic incidents. A key element of this new response framework is the use of exercises to test and evaluate federal agencies’ policies and procedures, response capabilities, and skill levels. The Coast Guard has primary responsibility for such testing and evaluation in the nation’s ports and waterways, and as part of its response, it has added multiagency and multicontingency terrorism exercises to its training program. These exercises vary in size and scope and are designed to test specific aspects of the Coast Guard’s terrorism response plans, such as communicating with state and local responders, raising maritime security levels, or responding to incidents within the port. For each exercise the Coast Guard conducts, an after- action report detailing the objectives, participants, and lessons learned must be produced within 60 days. The framework under which federal agencies would coordinate with state and local entities to manage a port-terrorism incident is still evolving. As directed by Homeland Security Presidential Directive/HSPD-5, issued in February 2003, this framework is designed to address all types of responses to national emergencies, not just port-related events. Key elements of the framework have been released over the past 2 years. For example, the Department of Homeland Security released the Interim National Response Plan in September 2003 and was in the final approval stage for a more comprehensive National Response Plan in November 2004, as our work was drawing to a close. DHS announced the completion of the National Response Plan on January 6, 2005, too late for a substantive review to be included in this report. However, the finalized plan is designed to be the primary operational guidance for incident management and, when fully implemented, will incorporate or supersede existing federal interagency response plans. According to the updated implementation schedule in the National Response Plan, federal agencies will have up to 120 days to bring their existing plans, protocols, and training into accordance with the new plan. In March 2004, the department also put in place a system, called the National Incident Management System, which requires common principles, structures, and terminology for incident management and multiagency coordination. Although the framework that will be brought about by the final plan, the management system, and other actions is still in the implementation phase, some of the protocols and procedures contained in this framework were already evident at the port exercises we observed. However, it is still too early to determine how well the complete framework will function in coordinating an effective response to a port-related threat or incident. 
Port security exercises have identified relatively few issues related to federal agencies’ legal authority, and none of these issues were statutory problems according to exercise participants and agency officials. Our review of fiscal year 2004 after-action reports and observation of specific exercises showed that exercise participants encountered seven legal issues, but exercise participants and agency officials we interviewed did not recommend statutory changes to address these issues. In three instances, exercise participants made nonstatutory recommendations (such as policy clarifications) to assist agencies in better exercising their authority, but did not question the adequacy of that authority. In the other four instances, no recommendations were made either because statutory authority was deemed sufficient or, in one case, because the issue involved a constitutional restraint (i.e., under the Fourth Amendment, police are prohibited from detaining passengers not suspected of terrorism). While the exercises were conducted to examine a wide range of issues and not specifically to identify gaps in agencies’ legal authority, the results of the exercises are consistent with the information provided by agency officials we interviewed, who indicated that sufficient statutory authority exists to respond to a terrorist attack at a seaport. Moreover, when Department of Homeland Security officials reviewed the issue of statutory authority, as required by Homeland Security Presidential Directive/HSPD- 5, they concluded that federal agencies had sufficient authority to implement the National Response Plan and that any implementation issues could be addressed by nonstatutory means, such as better coordination mechanisms. Most of the issues identified in port security exercises have been operational rather than legal in nature. Such issues appeared in most after- action reports we reviewed and in all four of the exercises we observed. While such issues are indications that improvements are needed, it should be pointed out that the primary purpose of the exercises is to identify matters that need attention and that surfacing problems is therefore a desirable outcome, not an undesirable one. The operational issues can be divided into four main categories, listed in descending order of frequency with which they were reported: Communication—59 percent of the exercises raised communication issues, including problems with interoperable radio communications among first responders, failure to adequately share information across agency lines, and difficulties in accessing classified information when needed. Adequacy or coordination of resources—54 percent of the exercises raised concerns with the adequacy or coordination of resources, including inadequate facilities or equipment, differing response procedures or levels of acceptable risk exposure, and the need for additional training in joint agency response. Ability of participants to coordinate effectively in a command and control environment—41 percent of the exercises raised concerns related to command and control, most notably a lack of knowledge or training in the incident command structure. Lack of knowledge about who has jurisdictional or decision-making authority—28 percent of the exercises raised concerns with participants’ knowledge about who has jurisdiction or decision-making authority. For example, agency personnel were sometimes unclear about who had the proper authority to raise security levels, board vessels, or detain passengers. 
Our review of the Coast Guard’s fiscal year 2004 after-action reports from port terrorism exercises identified problems with timeliness in completing the reports and limitations in the information they contained. Specifically:

Timeliness: Coast Guard guidance states that after-action reports are an extremely important part of the exercise program, and the guidance requires that such reports be submitted to the after-action report database (Contingency Preparedness System) within 60 days of completing the exercise. However, current practice falls short: 61 percent of the 85 after-action reports were not submitted within this 60-day time frame. Late reports were submitted, on average, 61 days past the due date. Exercises with late reports include large full-scale exercises designed to identify major interagency coordination and response capabilities. Not meeting the 60-day requirement can lessen the usefulness of these reports. Coast Guard guidance notes, and officials confirm, that exercise planners should regularly review past after-action reports when planning and designing future exercises, and to the extent that reports are unavailable, such review cannot be done. In previous reviews of exercises conducted by the Coast Guard and others, we found that timely after-action reports were necessary to help ensure that potential lessons can be learned and applied after each counterterrorism exercise. The main problem in producing reports on a more timely basis appeared to be one of competing priorities: Coast Guard field personnel indicated that other workload priorities were an impediment to completing reports, but most of them also said 60 days is a sufficient amount of time to develop and submit an after-action report. Officials cited the development of the Contingency Preparedness System, which is the program for managing exercises and after-action reports, as a step allowing for a renewed emphasis on timeliness. Headquarters planning staff are able to run reports using this system and regularly notify key Coast Guard officials of overdue after-action reports. However, this system was implemented more than 1 year ago, in August 2003, and was, therefore, in place during the period in which we found a majority of after-action reports were late. We did not compare our results with timeliness figures for earlier periods, and we, therefore, do not know the extent to which the system may have helped reduce the number of reports that are submitted late. Even if the new system has produced improvement, however, the overall record is still not in keeping with the Coast Guard’s 60-day requirement.

Content and quality: Coast Guard guidance also contains criteria for the information that should be included in an after-action report. These criteria, which are consistent with standards identified in our prior work, include listing each exercise objective and providing an assessment of how well each objective was met. However, 18 percent of the after-action reports we reviewed either did not provide such an objective-by-objective assessment or identified no issues that emerged from the exercise. While the scope of each exercise may contribute to a limited number of issues being raised, our past reviews found that after-action reports need to accurately capture all exercise results and lessons learned; otherwise, agencies may not be benefiting fully from exercises in which they participate.
Similarly, officials at the Department of Defense, which like the Coast Guard conducts a variety of exercises as part of its training, said that if their after-action reports lack sufficient fundamental content, they cannot be used effectively to plan exercises and make necessary revisions to programs and protocols. Our review indicated that, in addition to the pressure of other workload demands, two additional factors may be contributing to limitations in report content and quality—current review procedures and a lack of training for planners. Headquarters planning officials noted that local commands have primary responsibility for reviewing after-action reports and that limited criteria exist at headquarters for evaluating the content of reports submitted by these commands. At the field level, many planners with whom we spoke said they were unaware of any written documentation or exercise-planning guidance they could refer to when developing an after-action report. The Coast Guard has cited several planned actions that may allow for improved content and quality in after-action reports. These actions include updating exercise management guidance and promulgating new instructions related to preparing after-action reports and collecting lessons learned. While these initiatives may address issues of content and quality in after-action reports, they are currently still in the development phase. A successful response to a terrorist threat or incident in a seaport environment clearly requires the effective cooperation and coordination of numerous federal, state, local, and private entities—issues that exercises and after-action reports are intended to identify. Complete and timely analyses of these exercises represent an important opportunity to identify and correct barriers to a successful response. The Coast Guard’s inability to consistently report on these exercises in a timely and complete manner represents a lost opportunity to share potentially valuable information across the organization. The Coast Guard’s existing requirements, which include submitting these reports within 60 days and assessing how well each objective has been met, appear reasonable but are not being consistently met. Coast Guard officials cited a new management system as their main effort to making reports more timely, but this system has been in place for more than a year, and timeliness remains a problem. It is important for Coast Guard officials to examine this situation to determine if more needs to be done to meet the standard. The Coast Guard has several other steps under development to address issues of report content and completeness, and it is too early to assess the effect these actions will have. For this set of actions, it will be important for the Coast Guard to monitor the situation to help ensure that exercises can achieve their full and stated purpose. To help ensure that reports on terrorism-related exercises are submitted in a timely manner that complies with all Coast Guard requirements, we are making one recommendation, that the Commandant of the Coast Guard review the Coast Guard’s actions for ensuring timeliness and determine if further actions are needed. We provided DHS, DOJ, and DOD with a draft of this report for review and comment. The Coast Guard generally concurred with our findings and recommendation and did not provide any formal comments for inclusion in the final report. DOJ and DOD also did not have any official comments. 
DOD provided two technical clarifications, which we have incorporated to ensure the accuracy of our report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Commandant of the Coast Guard, appropriate congressional committees, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (415) 904-2200 or by email at wrightsonm@gao.gov, or Steve Caldwell, Assistant Director, at (202) 512-9610 or by email at caldwells@gao.gov, or Steve Calvo, Assistant Director, at (206) 287-4839 or by email at calvos@gao.gov. Other key contributors to this report were Christine Davis, Wesley Dunn, Michele Fejfar, Lynn Gibson, Dawn Hoff, David Hudson, Dan Klabunde, Ryan Lambert, and Stan Stenersen. Guidance and experience stress producing AARs that fully assess training objectives and document deficiencies. Coast Guard guidance: calls for exercises to be designed to expose weaknesses in plans and procedures and highlight resource and training deficiencies. Minimum requirements for AARs include documentation of each supporting objective and an assessment of how well each objective was met. Past GAO work: when AARs do not accurately capture exercise results and lessons learned, agencies may not be benefiting fully from exercises in which they participate. DOD perspective: DOD officials said AARs that did not provide fundamental content cannot be used effectively to plan exercises and make necessary revisions to programs and protocols. They also noted that new operational missions may require an additional emphasis on exercise planning and after-action reporting. Assessment of exercises may not be sufficient: 18 percent of AARs we reviewed identified no issues or did not provide adequate assessment of training objectives. Review procedures and training for planners may be insufficient in this area. Headquarters planning officials noted that the primary review of all AARs resides solely at the local command level. Although all submitted AARs are reviewed “for general approval” by headquarters officials, they said that this review uses limited criteria (grounds for rejection include use of inappropriate language or participants' names). Many Coast Guard field personnel we interviewed said they were unaware of any written documentation or exercise planning guidance they could refer to when developing an AAR. Some efforts to address timeliness are under way, but effects to date are limited. Coast Guard officials said the Contingency Preparedness System (CPS), the program for managing exercises and AARs, has allowed for a renewed emphasis on report timeliness. Headquarters planning staff currently use this system to notify each area of overdue AARs. However, CPS has been in place since August 2003, and timeliness remains a concern. Officials have also discussed the possibility of reducing the AAR submission deadline (to as few as 15 days), but efforts are still ongoing due to “pushback from the field.” They also said that the formal Coast Guard training courses emphasize that AAR development be incorporated into the planning process and exercise timeline. Senior exercise management officials said they are also updating an instruction related to collecting AARs and lessons learned. 
They expect it to be promulgated to the field in 1-6 months. Officials noted the following efforts to improve the content and quality of AARs: formal training courses that encourage documenting exercise information quickly to capture relevant information and lessons learned before recall is diminished or competing priorities take over; an updated instruction on AARs and lessons learned collection (currently in development); and proposed increases in the functionality of CPS, which may offer additional incentives for planners to utilize the system.

Key elements of the national response framework are evolving, including the release of the National Incident Management System and the draft National Response Plan. There will also be a transitional period for agencies to revise their plans: once the final NRP is released, agencies will have up to 180 days to revise their plans to align with the NRP. Few legal issues surfaced in port exercises or after-action reports, and none of these issues were statutory problems according to exercise participants and agency officials. Exercises and after-action reports identified operational issues to varying degrees; key issues included communication, incident command, and resource coordination concerns. Many after-action reports are not submitted on time, and the content and quality of some do not meet requirements. Actions taken by the Coast Guard to address these problems have had limited effect thus far.

The objectives of this report were to (1) describe the emerging framework under which the federal government coordinates with state and local entities to address a terrorist incident in a U.S. port; (2) identify the issues, if any, regarding federal agencies’ legal authority that have emerged from port security exercises and what statutory actions might address them; (3) describe the types of operational issues being identified through these exercises; and (4) identify any management issues related to Coast Guard-developed after-action reports. To address these objectives, we reviewed relevant legislation, regulations, directives, and plans, analyzed agency operational guidance and Coast Guard after-action reports (AARs), interviewed a variety of federal officials, and observed several port security exercises. To identify the emerging framework to address a terrorist incident in a U.S. port, we reviewed relevant statutes such as the Homeland Security Act of 2002 and the Maritime Transportation Security Act of 2002 and implementing maritime regulations at 33 CFR, parts 101 to 106. We also reviewed Homeland Security Presidential Directive/HSPD-5 and Presidential Decision Directive 39. Operational plans that were included in our analysis included the Initial National Response Plan, the Interagency Domestic Terrorism Concept of Operations plan (CONPLAN), the Interim Federal Response Plan, and the National Response Plan “Final Draft.” We also reviewed agency guidance related to exercise planning and evaluation, such as the Coast Guard Exercise Planning Manual and Contingency Preparedness Planning Manual, as well as the Department of Homeland Security/Office for Domestic Preparedness’ Exercise and Evaluation Program. Findings were supplemented with interviews of key officials in federal agencies, including the Coast Guard (CG), the Department of Homeland Security (DHS), the Department of Defense (DOD), the Department of Justice (DOJ), and related federal maritime entities such as Project Seahawk. To provide a framework for evaluating agencies’ legal authority in responding to a terrorist incident in a U.S.
port, we adopted a case study methodology because it afforded a factual context for the emergence of legal issues that could confront agencies in the exercise of their authority. Our efforts included attending four U.S. port-based terrorism exercises (Los Angeles, Calif.; Hampton Roads, Va.; Charleston, S.C.; Philadelphia, Pa.), reviewing CG AARs for fiscal year 2004, and conducting in-person and telephone interviews with DHS, CG, DOJ, DOD, and Project Seahawk. The port exercises we selected to visit were geographically diverse, and each was conducted in either August or September of fiscal year 2004. Additional criteria for exercise selection included the strategic importance of the port (as defined by the Maritime Administration), the variety of terrorism scenarios to be exercised, and the federal, state, and local players involved. The AARs we reviewed were based on a list of all fiscal year 2004 exercises provided to us by the CG. We focused on any contingency that included terrorism and then requested AARs for those completed exercises from the CG. According to CG guidance, AARs are required to be submitted within 60 days of exercise completion. To ascertain compliance with this guidance, CG personnel provided us with the dates that AARs for terrorism-related exercises were received at headquarters. We used this information, in conjunction with the exercise start and stop dates, to determine which reports were on time, which were late, and the average time late reports were submitted beyond the 60-day requirement. While issues of a legal nature did surface during our observation of exercises and analysis of AARs, exercise participants and agency officials did not recommend statutory changes for these issues. We generally relied upon the agency’s position as to whether legislation was necessary and did not independently assess the need for legislation by auditing the specific issues identified in the exercises. To identify operational issues that occurred during port terrorism exercises, we relied extensively on perspectives gained through our observations at the four port terrorism exercises as well as a comprehensive review of the available AARs for operational issues based on criteria we developed. In order to determine the frequency of various operational issues identified in the CG’s AARs, we noted the instances in which each subcategory within a major category appeared. These categories and subcategories were chosen through exercise observation and an initial review of available AARs by two independent analysts. This allowed us to identify operational issues that were consistent across the terrorism exercises. We used the following major categories and subcategories (which appear in parentheses): Communication (communication interoperability issues, communication policy or protocols between or within agencies, information sharing between agencies), Command and Control/Incident Command Structure (NIMS/ICS training, UC/IC information flow), Unclear Decision Making/Jurisdictional Knowledge (unclear decision-making authority, unclear lead authority, unclear authorities/jurisdictions of other agencies), and Resource Coordination/Capabilities (response capabilities, response coordination/joint tactics). To analyze the reports, two GAO analysts independently reviewed each report and coded operational issues based on the above subcategories. The results of each analysis were then compared and any discrepancies were resolved.
Overall percentages for the major categories were determined based on whether any of the issues were identified under the respective subcategories. The maximum number of observations for any major category was equal to one, regardless of the number of times a subcategory was recorded. To identify management concerns regarding the CG’s AARs, we reviewed our previous studies on this issue as well as CG and DHS issued guidance on exercise management, such as the Coast Guard’s Exercise Planning Manual and Contingency Preparedness Planning Manual Volume III. Our analysis also included in-person interviews with CG exercise management officials from headquarters and CG planners in the field to gain additional information on how terrorism exercises are planned and evaluated as well as how lessons learned are cataloged and disseminated. To ascertain the effect of untimely CG AARs (CG AARs are required to be completed within 60 days of exercise completion), we also interviewed exercise management experts from DOD. We conducted a content analysis of the available AARs to determine the weaknesses in the reports and where deviations from CG protocol were taking place. We conducted our work from June to December 2004 in accordance with generally accepted government auditing standards.
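The timeliness and frequency calculations described above are straightforward to reproduce. The following is a minimal, hypothetical sketch of that arithmetic in Python; the record fields, dates, and category codes are invented for the example and are not drawn from the Contingency Preparedness System or from GAO's working papers.

```python
from datetime import date

# Hypothetical after-action report (AAR) records: exercise end date, date the
# AAR was received at headquarters, and the operational-issue subcategories
# coded by the analysts. Field names and values are illustrative only.
aars = [
    {"end": date(2004, 3, 15), "received": date(2004, 5, 1),
     "issues": {"comm_interoperability", "resource_capabilities"}},
    {"end": date(2004, 6, 10), "received": date(2004, 9, 20),
     "issues": {"comm_info_sharing"}},
    {"end": date(2004, 8, 2), "received": date(2004, 12, 1),
     "issues": set()},
]

DEADLINE_DAYS = 60  # Coast Guard requirement for submitting an AAR

# Map subcategories to the major categories used in the analysis (partial,
# invented mapping for illustration).
MAJOR = {
    "comm_interoperability": "Communication",
    "comm_info_sharing": "Communication",
    "resource_capabilities": "Resource coordination",
}

# Timeliness: which reports were late, and how late on average.
days_late = [(a["received"] - a["end"]).days - DEADLINE_DAYS for a in aars]
late = [d for d in days_late if d > 0]
print(f"{len(late)} of {len(aars)} reports were late "
      f"({100 * len(late) / len(aars):.0f} percent)")
if late:
    print(f"average days past the {DEADLINE_DAYS}-day deadline: "
          f"{sum(late) / len(late):.0f}")

# Frequency of major categories: a category counts at most once per report,
# no matter how many of its subcategories were recorded.
for cat in sorted(set(MAJOR.values())):
    hits = sum(1 for a in aars
               if any(MAJOR.get(s) == cat for s in a["issues"]))
    print(f"{cat}: {100 * hits / len(aars):.0f} percent of reports")
```

The final loop mirrors the counting rule used in the analysis: each major category is counted at most once per report, regardless of how many of its subcategories were recorded.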
Seaports are a critical vulnerability in the nation's defense against terrorism. They are potential entry points for bombs or other devices smuggled into cargo ships and ports' often-sprawling nature presents many potential targets for attack. To assess the response procedures that would be implemented in an attack or security incident, officials conduct port-specific exercises. Many federal, state, and local agencies may potentially be involved. The Coast Guard has primary responsibility for coordinating these exercises and analyzing the results. GAO examined (1) the emerging framework for coordinating entities involved in security responses, (2) legal and operational issues emerging from exercises conducted to date, and (3) Coast Guard management of reports analyzing exercises. GAO reviewed reports on 82 exercises from fiscal year 2004 and observed 4 exercises as they were being conducted. The framework under which federal agencies would manage a port-terrorism incident is still evolving. The primary guidance for response, the National Response Plan, is in the final stages of approval, and the National Incident Management System, the structure for multiagency coordination, is still being put in place. As a result, it is too early to determine how well the complete framework will function in an actual incident. GAO's review of fiscal year 2004 terrorism-related reports and exercises identified relatively few legal issues, and none of these issues produced recommendations for statutory changes. Most issues have instead been operational in nature and have surfaced in nearly every exercise. They are of four main types: difficulties in sharing or accessing information, inadequate coordination of resources, difficulties in coordinating effectively in a command and control environment, and lack of knowledge about who has jurisdictional or decision-making authority. Reports on the exercises often do not meet the Coast Guard's standards for timeliness or completeness. Sixty-one percent of the reports were not submitted within 60 days of completing the exercise--the Coast Guard standard. The Coast Guard has implemented a new system for tracking the reports, but after a year of use, timeliness remains a concern. The Coast Guard has requirements for what the reports should contain, but 18 percent of the reports did not meet the requirement to assess each objective of the exercise. The Coast Guard has cited several planned actions that may allow for improving completeness. These actions are still in development, and it is too early to determine how much they will help.
A paid preparer is simply anyone who is paid to prepare, assist in preparing, or review a taxpayer’s tax return. In this statement, we refer to two categories of paid preparers—tax practitioners and unenrolled preparers. CPAs, attorneys, and enrolled agents are tax practitioners. Tax practitioners differ from unenrolled preparers in that they can practice before IRS, which includes the right to represent a taxpayer before IRS, prepare and file documents with IRS for the taxpayer, and correspond and communicate with IRS. We use the term unenrolled preparer to describe the remainder of the paid preparer population. In most states, anyone can be an unenrolled preparer regardless of education, experience, or other standards. Tax practitioners are subject to standards of practice under the Department of Treasury Circular No. 230. Enrolled agents are generally required to pass a three-part examination and complete annual continuing education, while attorneys and CPAs are licensed by states but are still subject to Circular 230 standards of practice if they practice before IRS. Generally, unenrolled preparers are not subject to these requirements. In April 2006, we made a recommendation to IRS to conduct research on the extent to which paid preparers meet their responsibility to file accurate and complete tax returns (GAO-06-563T). IRS subsequently conducted a study of the quality of paid preparers and issued a report recommending increased oversight of paid preparers. Recommendations included (1) mandatory registration, (2) competency testing and continuing education, and (3) holding all paid preparers—including unenrolled preparers—to Circular 230 standards of practice. IRS implemented each recommendation through regulations issued in September 2010 and June 2011. The June 2011 regulations amended Circular 230 and established a new class of practitioners called “registered tax return preparers.” IRS intended for these new requirements to support tax professionals, increase confidence in the tax system, and increase taxpayer compliance. In 2013, however, the U.S. District Court for the District of Columbia determined that IRS lacked the statutory authority to regulate unenrolled preparers and enjoined IRS from enforcing the new testing and continuing professional education requirements. According to IRS officials, approximately 84,148 competency exams were taken prior to the District Court’s decision. IRS appealed the order, but it was affirmed in February 2014 by the U.S. Court of Appeals for the District of Columbia Circuit. Figure 1 provides a summary timeline of IRS’s implementation of paid preparer requirements and legal proceedings. The President’s Fiscal Year 2015 budget, released in March 2014, included a proposal to explicitly provide the Secretary of the Treasury and IRS with the authority to regulate all paid preparers. Although the District Court determined that IRS does not have the authority to regulate unenrolled preparers, the decision did not affect the requirement that all paid preparers obtain a Preparer Tax Identification Number (PTIN) and renew their PTIN annually. As of March 16, 2014, approximately 676,000 paid preparers have registered or renewed their PTINs. As shown in figure 2, the two largest categories of PTIN registrations and renewals are unenrolled preparers—55 percent—and CPAs—31 percent. Currently, Oregon, Maryland, California, and New York regulate paid preparers. Both Oregon and California began to regulate paid preparers in the 1970s, while Maryland and New York’s programs were implemented more recently. Further, the programs themselves involve different types of requirements for paid preparers, as illustrated in table 1.
In August 2008—prior to Maryland and New York implementing paid preparer requirements—we reported on state-level paid preparer requirements in California and Oregon. Specifically, we reported that both California and Oregon have requirements that paid preparers must meet before preparing returns; of the two states, Oregon has more stringent requirements. According to our analysis of IRS tax year 2001 NRP data, Oregon returns were more likely to be accurate while California returns were less likely to be accurate compared to the rest of the country after controlling for other factors likely to affect accuracy. Specifically, in August 2008, we found that the odds that a return filed by an Oregon paid preparer was accurate were 72 percent higher than the odds for a comparable return filed by a paid preparer in the rest of the country. According to IRS’s SOI data, an estimated 81.2 million or 56 percent of approximately 145 million individual tax returns filed for tax year 2011 were completed by a paid preparer. Estimated use of paid preparers was fairly evenly distributed across income levels, and as table 2 shows, taxpayers with more complex returns used preparers the most. For example, preparers were more commonly used by taxpayers who filed the Form 1040 as opposed to the 1040EZ or 1040A and those claiming itemized deductions or the Earned Income Tax Credit (EITC). Across all income levels taxpayers who used paid preparers had a higher median refund than those who prepared their own returns at statistically significant levels, as shown in table 3. Specifically, individual taxpayers who used a paid preparer had an estimated median tax refund across all adjusted gross income levels that was 36 percent greater than taxpayers who prepared their own return. Taxpayers rely on paid preparers to provide them with accurate, complete, and fully compliant tax returns; however, tax returns prepared for us in the course of our investigation often varied widely from what we determined the returns should and should not include, sometimes with significant consequences. Many of the problems we identified would put preparers, taxpayers, or both at risk of IRS enforcement actions. The NRP’s review of tax returns from 2006 through 2009 also found many errors on returns prepared by paid preparers, and some of those errors were more common on paid prepared returns than on self-prepared returns. Nearly all of the returns prepared for our undercover investigators were incorrect to some degree, and several of the preparers gave us incorrect tax advice, particularly when it came to reporting non-Form W-2 income and the EITC. Only 2 of 19 tax returns showed the correct refund amount. While some errors had fairly small tax consequences, others had very large consequences resulting in the overstatement of refunds from $654 to $3,718. Our undercover investigators visited 19 randomly selected tax preparer offices—a non-generalizeable sample—to have taxes prepared. We developed two taxpayer scenarios based on common tax issues that we refer to as our “Waitress Scenario” and our “Mechanic Scenario.” Key characteristics of each scenario are summarized in table 4. Refund amounts derived by the 19 preparers who prepared tax returns based on our two scenarios varied greatly. For our waitress scenario, the correct refund amount was $3,804, however, refund amounts on returns prepared for our undercover investigators ranged from $3,752 to $7,522. 
Similarly, the correct refund amount for the mechanic scenario was $2,351; however, refunds ranged from $2,351 to $5,632. Paid preparer errors generated during our 19 non-generalizeable visits resulted in refund amounts that varied from giving the taxpayer $52 less to $3,718 more than the correct amount. Of the 19 paid preparers we visited, 2 determined the correct refund amount: one correct tax return was prepared for the waitress scenario and one for the mechanic scenario. An additional 4 paid preparers calculated tax returns within $52 of the correct refund amount. On the remaining 13 tax returns—7 for the waitress scenario and 6 for the mechanic scenario—preparers overestimated the total refund by $100 or more. Figure 3 shows the amount of the refund over and under the correct refund amount. In some instances, paid preparers made similar errors across multiple site visits. For example, on the waitress return paid preparers made two of the same errors: (1) not claiming the unreported cash tips and (2) claiming both children as eligible to receive the EITC. These errors resulted in clusters of overstated refunds. In four site visits, paid preparers not claiming unreported cash tips resulted in a refund amount overstated by $654. In three site visits, paid preparers made both errors, which resulted in a refund amount overstated by $3,718. In the mechanic scenario, paid preparers that did not include side income resulted in tax refunds that ranged from $2,677 to $3,281 above the correct refund amount. A majority of the 19 paid preparers we visited made errors on common tax return issues; on some lines of the tax return most paid preparers were correct. Some of the most significant errors involved paid preparers (1) not reporting non-Form W-2 income, such as unreported cash tips, in 12 of 19 site visits; (2) claiming an ineligible child for the EITC in 3 of 10 site visits; and (3) not asking the required eligibility questions for the American Opportunity Tax Credit. Such errors could lead taxpayers to underpay their taxes and may expose them to IRS enforcement actions. By contrast, in some instances the majority of preparers took the right course of action. For example, 17 of 19 paid preparers completed the correct type of tax return and 18 of 19 preparers correctly determined whether to itemize or claim the standard deduction. Our results are summarized in figure 4. Type of tax return. Paid preparers completed the correct type of tax return—the Form 1040—for 17 of 19 site visits. Two paid preparers incorrectly completed the Form 1040A for the waitress scenario. The Form 1040A should not have been used because the waitress received tip income that was not reported to her employer. Dividend and capital gains income. Preparers recorded the income correctly on 8 of 9 returns. The mechanic received qualified and ordinary dividends, and capital gains from a mutual fund that were reinvested into the fund. This income was documented on a third party reporting form; the Form 1099-DIV. According to IRS guidance, a Form 1099-DIV must be filed for any person who receives dividends of $10 or more, including for funds that are reinvested. Mechanic Scenario, Site Visit #1 One paid preparer who did not accurately record the investment income said that it was not necessary to include income that was reinvested in a mutual fund. Total income. Of the 10 waitress returns prepared for us, 3 included the unreported cash tip income. 
However, only one of the three returns included the correct amount of tip income. Total income for the waitress scenario should include income documented on the Form W- 2, as well as the amount of unreported cash tip income offered by our investigator to the paid preparer during the site visit. The two returns that did not include the correct amount of tip income included lesser amounts. Waitress Scenario, Site Visit #5 In response to the investigator mentioning her unreported cash tip income, one paid preparer told her that tips not included on the Form W-2 do not need to be reported. Total income for the mechanic return should include non-Form W-2 business income—resulting from mechanic work and babysitting conducted outside of a formal employment arrangement—and income from ordinary dividends and capital gains. Of the 9 mechanic returns prepared for us, 4 returns included both the business income and the investment income. However, only 3 returns included the correct amounts of business and investment income. Incorrectly reporting income often resulted in cascading errors on other lines of the tax return. Tax returns that did not include side income had errors in credits that are calculated based on income. For example, if a paid preparer did not report side income in the mechanic scenario, the resulting total income would make the mechanic eligible for the EITC when he otherwise would not be eligible. Similarly, because two paid preparers incorrectly chose not to include unreported tip income for the waitress, they selected the wrong type of tax return, the Form 1040A. Mechanic Scenario, Site Visits #3 and #9 Two paid preparers demonstrated what the refund amount would be if the side income were reported compared to if it were not reported. Both preparers did not record the side income. Itemized or standard deduction. All but one of the 19 returns correctly recorded the most advantageous deduction for the two scenarios. According to IRS guidance, taxpayers should itemize deductions when the amount of their deductible expenses is greater than the standard deduction amount. For the waitress scenario, the most advantageous deduction would be the standard deduction for head of household, and for the mechanic scenario, the itemized deductions were more advantageous. One paid preparer chose to use the standard deduction for the mechanic, even though it was approximately $3,000 less than the total amount of the itemized deductions we included in the scenario. Child-care expenses. All 19 paid preparers did not record child-care expenses because neither the waitress nor mechanic was eligible to receive the credit. While none of the paid preparers recorded the credit, the reasons the preparers cited were often incorrect. According to IRS guidance, a taxpayer must attempt to collect the Social Security number of his or her child-care provider, but if unsuccessful, can report that fact to IRS and still claim the credit. For the waitress scenario, the reason that she was ineligible to claim the child-care expenses was that she did not attempt to get her child-care provider’s Social Security number. Upon learning that she did not have the Social Security number of the provider, several of the paid preparers did not enter her child care expenses on her return. IRS guidance states that qualified child-care expenses only include amounts paid while the taxpayer worked or looked for work. 
The mechanic and his wife were not eligible for the credit because the child-care expenses were incurred for running errands, and not so that either parent could work. Again, many tax preparers said that the reason the credit could not be claimed was because the mechanic did not have the child-care provider’s Social Security number, not because he was otherwise ineligible. Student loan interest. Eight of 10 paid preparers correctly included the deduction for student loan interest. The waitress’s Form 1098-E shows the interest the lender received from the taxpayer on qualified student loans. A taxpayer receives a Form 1098-E if student loan interest of $600 or more is paid during the year. Sales tax deduction. Seven of 9 preparers recorded sales tax as a deduction on the mechanic’s tax return, however not all chose the most advantageous amount. According to IRS guidance, taxpayers who itemize deductions can choose whether to deduct local income taxes or sales taxes. Because the mechanic lived in a state that did not have income tax, sales tax should have been deducted. Of the 7 paid preparers that deducted sales taxes, only 2 recorded the amount that was most advantageous to the taxpayer. IRS provides an online calculator to help taxpayers estimate the amount of sales taxes they likely paid in a year. To determine this estimate, taxpayers input basic information such as ZIP code and annual income in the calculator. Five preparers chose amounts that were lower than the amount the calculator estimated. Social Security and Medicare tax on unreported tips. Two of 10 paid preparers completed the Form 4137 and reported the amount of taxes owed on the tip income. Because the waitress received unreported cash tips, the amount of taxes owed on the unreported cash tip income should be calculated using the Form 4137. However, one of the preparers included a lesser amount of tip income when performing the calculation, resulting in a smaller amount of taxes owed. Another preparer reported the tip income by incorrectly completing a Schedule C, Profit or Loss from Business, and a Schedule SE for self-employment taxes. Earned Income Tax Credit. The EITC on line 64a was another area where paid preparers made mistakes that resulted in a significant overstatement of the refund. Of the 10 returns prepared for the waitress, 3 reported two children on the Schedule EIC, instead of the one child who lived with the taxpayer in 2013 and was eligible for the EITC. Waitress Scenario, Site Visit #4 One paid preparer questioned the investigator on the amount of time her older child lived with her. The investigator responded that the older child stayed with her on weekends. The paid preparer discussed the investigator’s response with the office manager and then stated that she can claim the child for the EITC if no one else does, which was not correct. American Opportunity Tax Credit. All 9 paid preparers correctly chose the American Opportunity Tax Credit for the mechanic scenario. The mechanic had a 20-year-old son attending a community college and paid for both his tuition and books. According to IRS guidance, to be eligible for this credit, a student must meet certain requirements including full-time enrollment at least half the year and no felony drug offense convictions. Although we instructed the investigator to respond to paid preparer inquiries such that his son met these requirements, some paid preparers did not ask the required questions to determine eligibility. 
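As an aside on the Form 4137 computation discussed above, the following sketch shows the basic arithmetic of figuring Social Security and Medicare tax on unreported tip income. It is a simplified, hypothetical illustration only: the tip amount is invented, the 6.2 percent and 1.45 percent figures are the standard employee tax rates for 2013, and the sketch ignores Form 4137 details such as the treatment of tips of less than $20 in a month from a single employer and the Social Security wage base limit.

```python
# Simplified illustration of the Form 4137 arithmetic: Social Security and
# Medicare tax on unreported tip income. Amounts are hypothetical; rates are
# the 2013 employee shares, and the real form includes additional adjustments.
SOCIAL_SECURITY_RATE = 0.062   # employee share, 2013
MEDICARE_RATE = 0.0145         # employee share, 2013

def tax_on_unreported_tips(unreported_tips: float) -> dict:
    """Return the Social Security and Medicare tax owed on unreported tips."""
    ss_tax = round(unreported_tips * SOCIAL_SECURITY_RATE, 2)
    medicare_tax = round(unreported_tips * MEDICARE_RATE, 2)
    return {
        "social_security": ss_tax,
        "medicare": medicare_tax,
        "total": round(ss_tax + medicare_tax, 2),
    }

# Hypothetical example: $2,600 in cash tips never reported to the employer.
print(tax_on_unreported_tips(2600.00))
# -> {'social_security': 161.2, 'medicare': 37.7, 'total': 198.9}
```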
All paid preparers are subject to certain requirements in the Internal Revenue Code (IRC) and may be subject to penalties for non- compliance. For example, the IRC imposes monetary penalties on paid preparers who understate a taxpayer’s tax liability due to willful or reckless conduct. As shown in figure 5, in 12 of 19 cases, paid preparers did not record additional side income not reported on Form W-2’s and may be subject to this penalty. The IRC also requires that paid preparers sign the tax return and furnish an identifying number. In 3 of 19 cases, preparers did not meet the signature requirement. In addition, 3 preparers used a PTIN that did not belong to them and one used a fake PTIN. Additionally, 3 of 10 preparers in our study may be subject to a penalty for not meeting due diligence requirements when determining if both of the waitress’s children qualified for the EITC. When considering the EITC, paid preparers must meet four due diligence requirements. Generally, if paid preparers file EITC claims, they must (1) ask all the questions to get the information required on Form 8867, Paid Preparers’ Earned Income Credit Checklist; (2) compute the amount of the credit using the EITC worksheet from the Form 1040 instructions or a similar document; (3) ask additional questions when the information the client gives the preparer seems incorrect, inconsistent, or incomplete; and (4) keep a copy of Form 8867, the EITC worksheets, and other records used to compute the credit. Because the returns we had prepared were not real returns and were not filed, penalties would not apply. However, we plan to refer the matters we encountered to IRS so that any appropriate follow-up actions can be taken. The fees charged for tax preparation services varied widely across the 19 visits, sometimes between offices affiliated with the same chain. Often, paid preparers either did not provide an estimate of the fees upfront or the estimate was less than the actual fees charged. In several instances, upon completion of the tax return, the preparer initially charged one fee, then offered a reduced amount. Figure 6 shows the fees charged by each of the 19 paid preparers we visited for each scenario. For the waitress scenario, the final fees charged for tax preparation ranged from $160 to $408. For the mechanic scenario, the final fees charged for tax preparation ranged from $300 to $587. For the two correct tax returns that were prepared, the final fee charged was $260 for the waitress scenario and $311 for the mechanic scenario. Some paid preparers provided receipts that listed total charges that were higher than the “discounted” amount ultimately charged. For example, one preparer estimated the cost of services to be $794, but then charged the taxpayer $300. Paid preparers provided various reasons for the amount of the tax preparation fee, including, (1) the EITC form is the most expensive form to file, (2) the pricing and fees are at their peak from mid-January through February and then go down, and (3) there is a price difference depending if the tax return is completed in the morning or the evening. As in our limited investigation, our estimates from NRP data suggest that tax returns prepared by paid preparers contained a significant number of errors. As shown in table 5, returns prepared by a paid preparer showed a higher estimated error rate—60 percent—than returns prepared by the taxpayer—50 percent. Errors in this context changed either the tax due or the amount to be refunded. 
As noted before, it is important to remember that paid preparers are used more often on more complicated returns than on simpler ones, although we were unable to gauge the full extent to which this might be true. Furthermore, errors on a return prepared by a paid preparer do not necessarily mean the errors were the preparer’s fault; the taxpayer may be to blame. Preparers depend upon the information provided by the taxpayer. In addition to different rates of errors on paid preparer filed returns and self-prepared returns, the amount taxpayers owed IRS also differed. Specifically, the estimated median amount owed to IRS was higher for paid preparer filed returns. For instance, as shown in table 6, it is estimated that taxpayers using a paid preparer owed a median of $354 to IRS, compared with $169 for taxpayers preparing their own return. NRP estimates show that both individuals and paid preparers make errors on specific forms and lines of Form 1040, some of which we experienced in our undercover visits. Table 7 shows that in many instances, returns completed by a paid preparer are estimated to have a greater percentage of errors compared to self-prepared returns. For example, of returns prepared by a paid preparer, 51 percent have an error on the EITC line compared to 44 percent of self-prepared tax returns. In total, for five line items we analyzed, the difference in the percent of errors on returns prepared by a paid preparer was statistically greater than the percent of errors on self-prepared returns. These line items include (1) the itemized or standard deduction, (2) business income, (3) total income, (4) the EITC, and (5) the refund amount. Differences between the percent of returns with errors on the student loan interest deduction line, the unreported Social Security and Medicare tax on tips line, and the education credit line were not statistically significant when comparing returns done by a paid preparer to those that were self-prepared. Over half of all taxpayers rely on the expertise of a paid preparer to provide advice and help them meet their tax obligations. IRS regards paid preparers as a critical link between taxpayers and the government. Consequently, paid preparers are in a position to have a significant impact on the federal government’s ability to collect revenue and minimize the estimated $385 billion tax gap. As of March 2014, 55 percent of paid tax preparers are unenrolled preparers, not regulated by IRS. Undoubtedly, many paid preparers do their best to provide their clients with tax returns that are both fully compliant with the tax law and cause them to neither overpay nor underpay their federal income taxes. However, IRS data, which more broadly track compliance, show preparers made serious errors, similar to the findings from our site visits. The higher level of accuracy of Oregon’s tax returns compared to the rest of the country suggests that a robust regulatory regime involving paid preparer registration, qualifying education, testing, and continuing education may help facilitate improved tax compliance. The courts determined that IRS does not have sufficient authority to regulate unenrolled preparers. In March 2014, the administration proposed that the Treasury and IRS be granted the explicit authority to regulate all paid preparers. 
Providing IRS with the necessary authority for increased oversight of the paid preparer community will help promote high-quality services from paid preparers, will improve voluntary compliance, and will foster taxpayer confidence in the fairness of the tax system. If Congress agrees that significant paid preparer errors exist, it should consider legislation granting IRS the authority to regulate paid tax preparers. Chairman Wyden, Ranking Member Hatch, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact James R. McTigue, Jr. at (202) 512-9110 (mctiguej@gao.gov). Individuals making key contributions to this testimony include: Wayne A. McElrath, Director; Libby Mixon, Assistant Director; Gary Bianchi, Assistant Director; Amy Bowser; Sara Daleski; Mary Diop; Rob Graves; Barbara Lewis; Steven Putansu; Ramon Rodriguez; Erinn L. Sauer; and Julie L. Spetz.
For tax year 2011, an estimated 56 percent of about 145 million individual tax returns were completed by a paid preparer. IRS has long recognized that preparers' actions have an enormous effect on its ability to administer tax laws effectively and collect revenue that funds the government. Likewise, many taxpayers rely on preparers to provide them with accurate, complete, and fully compliant tax returns. GAO was asked to review the oversight and quality of paid preparers. This testimony examines (1) how preparers are regulated by IRS and (2) the characteristics of tax returns completed by preparers based on products GAO issued from April 2006 through August 2008 and work conducted from November 2013 to April 2014. GAO reviewed laws, regulations, and other guidance and interviewed IRS officials. GAO analyzed IRS Statistics of Income data from tax year 2011, the most recent data available, and the NRP database, which broadly tracks compliance. To gain insight on the quality of service provided, GAO conducted 19 undercover site visits to commercial preparers in a metropolitan area. Criteria to select the metropolitan area included whether the state regulates preparers and levies an income tax. The Internal Revenue Service's (IRS) authority to regulate the practice of representatives before IRS is limited to certain preparers, such as attorneys and certified public accountants. Unenrolled preparers—those generally not subject to IRS regulation—accounted for 55 percent of all preparers as of March 2014. In 2010, IRS initiated steps to regulate unenrolled preparers through testing and education requirements; however, the courts ruled that IRS lacked the authority. GAO found significant preparer errors during undercover site visits to 19 randomly selected preparers—a sample which cannot be generalized. Refund errors in the site visits varied from giving the taxpayer $52 less to $3,718 more than the correct refund amount. Only 2 of 19 preparers calculated the correct refund amount. The quality and accuracy of tax preparation varied. Seventeen of 19 preparers completed the correct type of tax return. However, common errors included not reporting non-Form W-2 income (e.g., cash tips) in 12 of 19 site visits; claiming an ineligible child for the Earned Income Tax Credit in 3 of 10 site visits where applicable; not asking the required eligibility questions for the American Opportunity Tax Credit; and not providing an accurate preparer tax identification number. These findings are consistent with the results of GAO's analysis of IRS's National Research Program (NRP) database. GAO analysis of NRP data from tax years 2006 through 2009 showed that both individuals and preparers make errors on tax returns. Errors are estimated based on a sample of returns, which IRS audits to identify misreporting on tax returns. Tax returns prepared by preparers had a higher estimated percent of errors—60 percent—than self-prepared returns—50 percent. Errors refer to changes either to the tax due or refund amount. If Congress agrees that significant preparer errors exist, it should consider legislation granting IRS the authority to regulate paid tax preparers. Technical comments from IRS were incorporated into this report.
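The NRP comparison summarized above (an estimated 60 percent error rate on paid-preparer returns versus 50 percent on self-prepared returns) is the kind of difference commonly checked with a test of two proportions. The sketch below is a generic, unweighted illustration of such a test; it is not GAO's estimation methodology, which relied on NRP's sample design, and the sample sizes shown are invented.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two independent proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented sample sizes for illustration: 3,000 paid-preparer returns with a
# 60 percent error rate versus 3,000 self-prepared returns with 50 percent.
z, p = two_proportion_z(1800, 3000, 1500, 3000)
print(f"z = {z:.2f}, two-sided p-value = {p:.4f}")
# Prints a large z statistic and a p-value near zero for these invented counts.
```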
Congress created the EIC to offset the impact of Social Security taxes and to encourage low-income workers to seek employment rather than welfare. Taxpayers earning income below a certain level may claim the credit. The amount of the EIC increases with increasing income, plateaus at a certain level of earnings, and then decreases until it reaches zero when earned income exceeds the maximum earning level allowed for the credit. Taxpayers with children can claim the EIC if they (1) have at least one EIC qualifying child, (2) meet income tests, (3) file with any filing status except “married filing separately,” and (4) were not a nonresident alien for any part of the year. To claim the EIC without a qualifying child, taxpayers must meet requirements 2, 3, and 4; be at least 25 years old but less than 65 at the end of the year; have lived in the United States for more than half the year; and not be claimed as a dependent on another return. Although the EIC has been credited with reducing welfare participation and lifting millions of low-income earners out of poverty, it has also been susceptible to error and abuse. In a February 28, 2002, report on its study of tax year 1999 EIC claims, IRS said that of the estimated $31.3 billion in EIC claims for that tax year, between $9.7 billion and $11.1 billion (30.9 to 35.5 percent) was overclaimed. Of the overclaims, the largest amount (about $2.3 billion) was caused by taxpayers claiming children who did not meet the qualifying child criteria. Most often, according to IRS, these errors were due to taxpayers claiming children who did not meet the residency requirements. EIC eligibility, particularly related to qualifying children, is difficult for IRS to verify through its traditional enforcement procedures, such as matching return data to third-party information reports. Correctly determining whether a child claimed by the taxpayer for EIC purposes meets the qualifying tests requires IRS to have detailed knowledge of the taxpayer’s household composition and living arrangements. However, IRS does not have the necessary resources to visit taxpayers’ homes and conduct the kind of interviews that would help it obtain that kind of detailed knowledge, and there is no certainty that the cost of such an effort would be worth the results. Thus, IRS must rely on its ability to clearly communicate to taxpayers what information is needed to certify them for the EIC and on taxpayers’ ability to produce documentation to substantiate their qualification for the EIC. IRS began implementing the recertification process in 1998, when, through audits, it disallowed, in whole or in part, the EIC claims on about 312,000 tax year 1997 returns and placed recertification indicators on its computerized accounts for those taxpayers. The indicators, which, in effect, tell IRS’s computers not to allow payment of any EIC claim to the taxpayers, are to remain until the taxpayers successfully recertify. To begin the recertification process, taxpayers are to attach a Form 8862 (Information To Claim Earned Income Credit After Disallowance) to the next tax return they file that includes an EIC claim. If a taxpayer claims the EIC without attaching Form 8862, IRS is authorized to disallow the credit, process the return without considering the EIC claim, and inform the taxpayer why it denied the claim.
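The screening triggered by the recertification indicator can be summarized in a short sketch. The following Python fragment is only an illustration of the steps described above and elaborated in the discussion that follows; the data model and the outcome labels are assumptions made for illustration, not IRS's actual systems or codes.

```python
def screen_eic_claim(has_recert_indicator: bool,
                     claims_eic: bool,
                     form_8862_attached: bool) -> str:
    """Illustrative screening of a return filed by a taxpayer whose EIC was
    previously disallowed; the outcome strings are hypothetical labels."""
    if not claims_eic:
        return "process return normally (no EIC claimed)"
    if not has_recert_indicator:
        return "process the EIC claim under normal rules"
    if not form_8862_attached:
        # Without Form 8862, IRS may disallow the credit, process the return
        # without it, and tell the taxpayer why the claim was denied.
        return "disallow EIC; process return without the credit; notify taxpayer"
    # With Form 8862 attached, the refund is frozen while IRS decides whether
    # to select the return for a correspondence audit (see the discussion below).
    return "begin recertification review; freeze refund pending audit-selection decision"
```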
Upon receipt of Form 8862, IRS procedures call for freezing the entire refund claimed on the return (not just the portion related to the EIC) and determining whether the return should be selected for audit. IRS examiners are to select the return for audit unless the taxpayer is no longer claiming the EIC child(ren) previously disallowed and is not claiming a new EIC child. Once the return has been selected for audit, the recertification process, with some minor differences, essentially follows IRS’s normal procedures for correspondence audits. These procedures generally involve examiners (1) asking taxpayers to provide support, (2) reviewing any support provided, and (3) advising taxpayers of the audit results. Since the EIC recertification program’s implementation, IRS has, among other things, expanded the information on recertification available to taxpayers, revised some of the correspondence it sends to taxpayers, and improved examiner training. Many of these changes were in response to recommendations resulting from prior reviews by us and the Treasury Inspector General for Tax Administration (TIGTA). (See app. I for a detailed discussion of the changes in the EIC Recertification Program since 1999, and see app. II for information on prior recommendations by us and TIGTA and IRS’s corrective actions.) There have also been some significant program developments since 1998. Most relevant to this report, (1) the definitions of qualifying child and eligible foster child for EIC purposes have changed and (2) starting with tax returns filed in 2001, IRS, as authorized by TRA97, began imposing a 2-year ban on credits to taxpayers who it determines negligently claimed the EIC through reckless or intentional disregard of the regulations. These developments are discussed in more detail later in the report. To determine whether IRS’s communications with taxpayers about recertification meet the needs of IRS and taxpayers, we analyzed IRS’s forms and correspondence related to recertification, interviewed a representative sample of IRS examiners (as described in the next paragraph) about certain forms, and reviewed the results of related work done by TIGTA. To determine whether information taxpayers are told to provide to prove their entitlement to the EIC is reasonably easy to obtain and consistent with what examiners accept, we did the following: We surveyed, via telephone, a random sample of 90 tax examiners from a list of 323 tax examiners, which, according to IRS, represented the population of examiners in its 10 processing centers who were working on recertification cases as of April 2001. The purpose of our survey was to determine how examiners evaluated evidential support from taxpayers and to help identify aspects of the EIC eligibility criteria that taxpayers had the most difficulty documenting. More details on our survey methods, as well as the confidence intervals of the estimates for all examiners that we made from our sample are provided in appendix III. We talked with representatives from 10 LITCs about any problems taxpayers have in understanding IRS correspondence related to recertification and in complying with IRS’s documentation requirements. We obtained a list from IRS’s Taxpayer Advocate’s Office of the 102 LITCs that were operating in 2001. From that list, we randomly selected 20 LITCs. After eliminating those LITCs that either chose not to participate or said that they did no EIC recertification work, we talked with representatives of 10 LITCs. 
Given our relatively small sample size and the relatively small proportion of the sample from which we were able to get useful information, we have no assurance that the results from this sample can be reliably generalized to all 102 LITCs. However, our sample does provide the views of about one-tenth of the listed 102 LITCs. To determine whether IRS’s treatment of similarly situated taxpayers is consistent, we analyzed IRS guidance and criteria related to the EIC and recertification; developed five scenarios involving various kinds of documentation that taxpayers might provide IRS in an attempt to prove their eligibility for the EIC; and held structured interviews with 21 examiners to determine, among other things, how they interpreted IRS’s recertification guidance and how they assessed the documentation in our five scenarios. We obtained the documents for our scenarios from EIC recertification cases that we had reviewed, and we deleted taxpayer-identifiable information, such as Social Security numbers, from the documents before giving them to the examiners. We subjectively selected the 21 examiners, on the basis of their availability to meet with us, from the 187 EIC recertification examiners at 4 of IRS’s 10 processing centers (Atlanta, Brookhaven, Kansas City, and Memphis). As such, the results of this analysis cannot be generalized beyond the 21 examiners. We also reviewed IRS’s plans for developing and implementing a decision support tool to be used by examiners working EIC cases, including those involving recertification. We performed our work between February 2000 and January 2002 in accordance with generally accepted government auditing standards. Although IRS has revised some of the correspondence it sends taxpayers as part of the recertification process, two standard forms that are an integral part of the process can lead to unnecessary taxpayer burden because they (1) are of questionable value to the recertification process and/or (2) provide the taxpayer with inadequate or confusing information. The forms are Form 8862 and Form 886-R (Supporting Documents). Copies of the two forms are in appendix IV. Taxpayer confusion can have even more critical implications now that IRS has begun imposing a 2-year EIC ban on credits to taxpayers who it determines have negligently claimed the EIC through reckless or intentional disregard of the regulations. Accurately determining whether a taxpayer’s erroneous claim is due to a simple mistake or to reckless or intentional disregard of the regulations can be complicated when the requirements for claiming the EIC are confusing. Taxpayers begin the recertification process by filing Form 8862 with their tax return. In a 1999 report, we raised concerns about the usefulness of Form 8862 and its potential for misleading or confusing taxpayers. We recommended that IRS stop using the form if it is not needed for recertification purposes. IRS did not eliminate the form because it said it relies on the form to “identify the type of action to be taken for taxpayers required to recertify.” In that regard, IRS does use Form 8862 to decide whether or not to initiate the recertification process. If a taxpayer files a return claiming the EIC and does not attach a Form 8862, IRS is authorized to disallow the credit without going through the recertification process and inform the taxpayer that the disallowance is due to the failure to attach Form 8862.
If a taxpayer submits Form 8862, according to IRS’s recertification guidelines, the taxpayer’s return is to be forwarded for audit if the taxpayer is still claiming the previously disallowed EIC child or is claiming a new EIC child. However, Form 8862 does not assist in this determination, because the names and Social Security numbers of the taxpayer’s children that IRS needs to match against the prior year’s tax return do not appear on the form. On the basis of our telephone survey of IRS examiners, we estimate that 86 percent of all examiners working in the recertification program do not find Form 8862 useful. A few examiners pointed out that Form 8862 is generally not part of the case file they receive when they begin recertification. Even when Form 8862 is in the case file, some examiners said that they do not use it because there are no supporting documents submitted with the form. Although the great majority of examiners do not find Form 8862 useful, IRS estimates that taxpayers need an average of 2 hours and 44 minutes to complete and file the form. In that regard, of the 10 LITC representatives we talked with, 7 said that Form 8862 is not easy for most of their clients to understand. Thirteen of the examiners we surveyed did say that Form 8862 had some value. Some pointed out that the form gave them some initial information about the taxpayer before seeking additional information. Others said that the form would alert taxpayers to the kind of documentation they should expect to provide during the recertification process. However, taxpayers would have to deduce the type of information needed because neither Form 8862 nor its instructions specifically tell taxpayers what, if any, documentation they may be asked to send IRS. On the basis of our telephone survey, we determined that an estimated 16 percent of examiners believe that Form 8862 misleads taxpayers into thinking that IRS’s final decision on their eligibility will be based on information in the form. Such a misconception seems understandable given the amount of information taxpayers are asked to provide on the Form 8862. Form 8862 is a 2-page form that requires taxpayers who are claiming the EIC with qualifying children to answer numerous questions and report information on such things as (1) the name of the school the child attended or the day care provider, (2) addresses where the child lived during the year, (3) the name and social security number of any other person the child may have lived with for more than half a year, and (4) the child’s health care provider or social worker if the child was disabled and older than 18. Form 886-R is the vehicle IRS examiners use to tell taxpayers what information they need to provide to prove their eligibility for the EIC as well as to gather information on two other tax issues—whether the taxpayer can also claim dependents and whether the taxpayer qualifies as a head of household. That form is confusing and incomplete. Of the 10 LITC representatives we interviewed, 8 did not believe that IRS adequately explained to taxpayers how EIC recertification is achieved and what documentation is needed to achieve recertification. We believe that Form 886-R contributes to that confusion. The format of Form 886-R could easily confuse taxpayers. For example, in addition to listing documents and information needed to prove eligibility for the EIC, the form lists documents and information needed to prove eligibility for dependent exemptions and the head of household filing status. 
Requesting documentary evidence to support a dependency claim and head of household filing status could confuse or mislead taxpayers about the requirements they need to meet for EIC recertification. To claim a person as a dependent, for example, a taxpayer must generally prove, among other things, that he or she provided more than one-half of the person’s total support during the calendar year. Therefore, the evidence IRS asks taxpayers to submit to prove that a child is their dependent includes documentation relating to financial support. However, the law does not require that taxpayers meet a financial support test to claim the EIC, and, thus, taxpayers can qualify for the EIC even if they cannot meet the financial support requirement for the dependency exemption. Form 886-R does not make clear that persons can still qualify for the EIC even if they cannot prove that their child qualified as a dependent, and there are no instructions sent to taxpayers along with the Form 886-R that provide that clarification. Thus, persons might incorrectly assume that because they cannot substantiate a child as a dependent, they do not qualify for the EIC. Taxpayers might also be confused by the references in Form 886-R to school records. The form tells taxpayers that one acceptable form of proof that a child lived with them is a school record or transcript containing, among other things, “dates of attendance for the entire tax year.” Since a tax year generally runs from January to December of the same year and a school year typically runs from September of one year to May or June of the next, some taxpayers may not easily discern that they need to obtain school records for 2 school years in order to provide adequate documentation for 1 tax year. In that regard, an IRS taxpayer advocate and an IRS lead examiner in one field office both told us that school year versus tax year is a difficult concept for taxpayers to understand, and examiners we interviewed said that school records submitted by taxpayers often relate to a school year rather than a tax year. The lack of more specific guidance on Form 886-R about the need for 2 years of school data increases the risk that a taxpayer will submit incorrect information, which, at a minimum, could (1) cause extra work for the examiner, (2) cause the taxpayer to contact the school again, and (3) delay a final decision on the taxpayer’s eligibility for the EIC. With a trend toward more nontraditional family units and recent changes in the definitions of qualifying child and foster child for EIC purposes, taxpayers must clearly understand what evidence IRS requires to substantiate the EIC relationship requirement. Form 886-R does not satisfy that need. In listing the documentation needed to prove eligibility for the EIC, Form 886-R includes (1) the child’s birth certificate and (2) the name, address, and Social Security number of the child’s mother and father (if other than the taxpayer and the taxpayer’s spouse). That documentation would be insufficient, however, to prove, for example, that a person is the taxpayer’s adopted child, grandchild, stepchild, or foster child—all of whom meet the definition of an EIC qualifying child. For example, as described by one examiner, a grandmother raising a grandchild with a different last name would have to prove her relationship to the child’s parents. 
Some examiners we interviewed said that they would accept various official documents that established the relationship requirement between a nonparental taxpayer and the EIC-qualifying child. The official documents they mentioned included birth certificates of the various parties, an adoption paper, some social program’s paperwork that states the relationship between child and taxpayer, or some insurance or medical record that states the relationship. None of these documents is mentioned on the Form 886-R. Although an examiner may eventually obtain the necessary documentation through follow-up correspondence with the taxpayer, the need for additional correspondence leads to extra work for examiners and taxpayers and can lengthen the time needed to close the audit and pay the EIC, if the taxpayer is found eligible. Census Bureau statistics provide an indication of the prevalence of nontraditional family units. According to 1997 Census Bureau statistics, there were 3.9 million children living in homes maintained by their grandparents. Of this number, 1.27 million lived with their grandparents without the presence of either parent, 1.77 million had only a mother present, 0.57 million had both parents present, and 0.28 million had only a father present. According to Census Bureau statistics, the greatest growth between 1992 and 1997 occurred among grandchildren living with grandparents with no parent present. The Census Bureau attributed the increase in grandchildren in these “skipped generation” living arrangements to the growth in drug use among parents, teen pregnancy, divorce, the rapid rise of single-parent households, mental and physical illness, AIDS, crime, child abuse and neglect, and the incarceration of parents. In addition to children living with grandparents without the presence of either parent, the Census Bureau found, as of Fall 1996, that 688,000 children without parents were living with other relatives and 622,000 children without parents were living with nonrelatives. Recent changes in the definitions of qualifying child and foster child for EIC purposes further highlight the need for IRS to make clear what evidence it requires to substantiate the EIC relationship requirement. To qualify as a taxpayer’s qualifying child in tax year 1999, a person had to be the taxpayer’s son, daughter, adopted child, grandchild, stepchild, or foster child, with a foster child defined as any child that (1) the taxpayer cared for as if it were the taxpayer’s own child and (2) lived with the taxpayer for the whole year, except for temporary absences. Those definitions were revised first by the Ticket to Work and Work Incentives Improvement Act of 1999 (P.L. 106-170) and then by the Economic Growth and Tax Relief Reconciliation Act of 2001 (P.L. 107-16). As a net result of those two laws, the current definition of a qualifying child is (1) a son, daughter, stepson, or stepdaughter, or a descendant of any such individual; (2) a brother, sister, stepbrother, or stepsister, or a descendant of any such individual, who the taxpayer cares for as the taxpayer’s own child; or (3) an eligible foster child of the taxpayer. An eligible foster child is now defined as an individual who is placed with the taxpayer by an authorized placement agency and cared for as the taxpayer’s own child. Also, a child who is legally adopted or is placed with the taxpayer by an authorized placement agency for adoption is considered the taxpayer’s child by blood for purposes of the EIC relationship test.
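Because these definitions determine what a taxpayer must document, a compact encoding may help keep them straight. The Python sketch below is one illustrative reading of the revised definitions summarized above, not IRS guidance; the relationship labels and the function's structure are assumptions made for illustration.

```python
# Post-2001 EIC relationship test, as summarized above (illustrative encoding).
DIRECT_LINE = {"son", "daughter", "stepson", "stepdaughter", "grandchild"}
SIBLING_LINE = {"brother", "sister", "stepbrother", "stepsister", "niece", "nephew"}

def meets_relationship_test(relationship: str,
                            cared_for_as_own: bool = False,
                            placed_by_authorized_agency: bool = False) -> bool:
    if relationship in DIRECT_LINE or relationship == "adopted child":
        # A son, daughter, stepson, or stepdaughter (or a descendant of one)
        # qualifies outright; a legally adopted child is treated as the
        # taxpayer's child by blood.
        return True
    if relationship in SIBLING_LINE:
        # A sibling, stepsibling, or a descendant of one qualifies only if the
        # taxpayer cares for the child as the taxpayer's own child.
        return cared_for_as_own
    if relationship == "foster child":
        # An eligible foster child must be placed by an authorized placement
        # agency and cared for as the taxpayer's own child.
        return placed_by_authorized_agency and cared_for_as_own
    return False
```

Under this reading, a grandchild qualifies outright, while a niece or nephew qualifies only if the taxpayer cares for the child as the taxpayer's own.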
With these definitional changes, for example, a taxpayer claiming a nephew as an EIC-qualifying child would have to provide documentation to prove that the child is a descendant of the taxpayer’s sibling. Before the definitional changes, the taxpayer would not have had to prove a blood relationship to the child but only that the taxpayer cared for the child as if it were the taxpayer’s own child. TRA97 authorizes IRS to ban a taxpayer from receiving the EIC for 2 years if it determines that the taxpayer negligently claimed the EIC through reckless or intentional disregard of the regulations. In addition to being banned for 2 years from receiving the EIC, taxpayers may be penalized an amount equal to 20 percent of their tax liability underpayment. IRS began imposing the 2-year ban starting with tax year 1999 returns (i.e., returns filed in 2000). During calendar year 2000, IRS imposed the ban on 7,608 taxpayers. IRS imposed another 14,432 bans during calendar year 2001. IRS guidance gives examiners the following example of when the ban may be imposed: “The taxpayer’s EIC in a prior year was disallowed by audit because the taxpayer could not demonstrate the child was the taxpayer’s qualifying child. The taxpayer files a subsequent return claiming EIC and again cannot demonstrate that the child was the taxpayer’s qualifying child. You can consider that the taxpayer intentionally disregarded the EIC rules and regulations and impose the two-year ban.” No doubt some taxpayers seeking recertification are intentionally disregarding the EIC rules and regulations. However, accurately differentiating between negligence and simple error can be hampered when taxpayers are faced with evidentiary requirements that are difficult to understand and/or comply with. Providing documentation to show that a child lived with the taxpayer has consistently been identified as the toughest EIC eligibility requirement to substantiate. This is true for EIC claimants in general, not just those who have to recertify. With respect to the Recertification Program, 80 percent of examiners said that, most or all of the time, a taxpayer’s failure to be recertified resulted from an inability to substantiate that a child lived with the taxpayer. As noted in the following excerpt from Form 886-R, IRS provides taxpayers with several examples of acceptable documents to establish a child’s living arrangement. The quoted excerpt clearly indicates that taxpayers need to submit only one of the three types of documentation listed (school, child care, or medical). “School records or transcripts or an administrative statement from a school official on school letterhead containing the child’s name, address, and dates of attendance for the entire tax year, and the name and address of the child’s parent or guardian, or A statement on company letterhead or a notarized statement from a child care provider containing the child’s name, address, and dates of care for the entire tax year, the name and address of the child’s parent or guardian, and the name and taxpayer identification number of the child care provider, or Medical records or an administrative statement from a health care provider containing the child’s name, address, and dates of medical care during the tax year, and the name and address of the child’s parent or guardian.” Our interviews with LITC representatives and IRS examiners indicated that each of these three types of documentation could pose problems for EIC claimants.
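The documentation examples quoted from Form 886-R can also be restated as a simple check, which makes the school-year versus tax-year problem discussed below easy to see: a record covering September through June of adjacent years cannot, by itself, cover a January-through-December tax year. In the Python sketch that follows, the field names, document-type labels, and date logic are illustrative assumptions; only the required data elements come from the excerpt above.

```python
from datetime import date

def meets_886r_residency_example(doc: dict, tax_year: int) -> bool:
    """Check one document against the examples quoted from Form 886-R above.
    Only one of the three document types need be submitted; the dictionary
    fields used here are hypothetical."""
    year_start, year_end = date(tax_year, 1, 1), date(tax_year, 12, 31)
    has_child_info = bool(doc.get("child_name") and doc.get("child_address"))
    has_parent_info = bool(doc.get("parent_or_guardian_name")
                           and doc.get("parent_or_guardian_address"))

    if doc["type"] in ("school_record", "child_care_statement"):
        # School and child-care documents must cover the entire tax year.
        covers_entire_year = doc["start_date"] <= year_start and doc["end_date"] >= year_end
        provider_ok = True
        if doc["type"] == "child_care_statement":
            provider_ok = bool(doc.get("provider_name") and doc.get("provider_tin"))
        return covers_entire_year and has_child_info and has_parent_info and provider_ok

    if doc["type"] == "medical_record":
        # Medical records need only show dates of care during the tax year.
        overlaps_year = doc["start_date"] <= year_end and doc["end_date"] >= year_start
        return overlaps_year and has_child_info and has_parent_info

    return False

# A record for the 2000-2001 school year alone does not cover tax year 2000.
print(meets_886r_residency_example(
    {"type": "school_record", "child_name": "A", "child_address": "X",
     "parent_or_guardian_name": "B", "parent_or_guardian_address": "X",
     "start_date": date(2000, 9, 1), "end_date": date(2001, 6, 15)},
    tax_year=2000))  # -> False
```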
In discussing EIC recertification with LITC representatives, we heard of various circumstances facing low-income taxpayers that complicate their ability to obtain documents that other taxpayers might obtain with little difficulty. Our interviews with IRS examiners also indicated that the evidentiary requirements related to child care are not consistent with what most examiners consider acceptable. In order for school records to be accepted, they must include an address for the child and an address for the taxpayer and, as discussed earlier, must be for 2 school years in order to cover the tax year in question. According to some IRS examiners we interviewed, the school records submitted by taxpayers often do not have both the child’s and the taxpayer’s addresses and often relate to a school year rather than a tax year. Earlier in this report, we discussed the problems taxpayers might encounter in distinguishing between a school year and a tax year. Another potential problem related to school records was raised by IRS’s National Taxpayer Advocate in a December 31, 2001, report to the Congress. In the report, the Advocate noted that examiners sometimes disallow the EIC because school records submitted by taxpayers reflect the addresses of the taxpayers’ relatives or friends. As explained by the Advocate, parents may provide school authorities with a relative’s or friend’s address, instead of their own, “in order for their child to attend a particular school for purposes of busing and facilitating before-school or after-school care.” Medical records can also cause problems for EIC claimants. According to some examiners we interviewed, many taxpayers submit their child’s immunization records as the medical record to prove residency. Of the 21 examiners we interviewed, 18 did not accept immunization records as proof of residency. Some examiners explained that immunization records do not include the addresses of either the child or the taxpayer and, as such, cannot be accepted as proof of residency. Some of the 18 examiners said that they would accept a letter from a physician or an official record from a medical center showing the child’s address as well as the taxpayer’s address as proof that the taxpayer and child have the same address. However, according to the LITC representatives we interviewed, many low-income taxpayers have no ongoing medical care. In that regard, we reported in 1997 that 10.6 million children, living generally in lower-income working families, were uninsured in 1996. We further reported that, according to various national studies, a high proportion of these children’s parents worked for small employers that most likely did not offer health insurance; even when employers offered medical coverage, the amount that employees had to pay toward it to cover their families could have made health insurance unaffordable; these uninsured children were less likely to (1) have a usual source of care, (2) see a specific physician, (3) receive care from a single site, (4) have had a visit to a physician in the past year, and (5) ever have had routine care; and medical care for uninsured children was more likely to be sporadic and fragmented. Considering the medical coverage of low-income taxpayers, obtaining medical records that provide enough information to demonstrate that the taxpayer’s and child’s addresses were the same for at least one-half of the year may not be easy.
LITC representatives said that getting documentation, such as medical records or school records, to prove residency or living arrangements is not easy. For example, migrant workers would have a tough time getting school records from the schools their children attended throughout the year. As we reported in October 1999, during 1993-94, 78 percent of migrant crop worker families lived in two or more locations. Of the 10 LITC representatives we interviewed, 5 said that IRS should develop a standard form on which it could indicate the specific period of time for which IRS needed support. A taxpayer could then take the form to a school or a medical office, which could just write in the child’s and taxpayer’s address for the specific tax year IRS wanted. A few of the examiners we surveyed also said that they would benefit from such a standard form because it would give them the exact information they are looking for to recertify taxpayers. In 1998, examiners in one processing center started using a locally devised form that essentially served the purpose of the standard form suggested by the LITC representatives. Use of the form by examiners at the center was optional. Although no study was done of its effectiveness, anecdotal information indicates that examiners found it effective. One examiner who used the form estimated that one-half of the taxpayers to whom she sent the form were able to secure verification, compared with the very few who were able to do so without the form. Form 886-R states that a notarized statement from a child-care provider with certain detailed information about the child and the child-care provider would be considered acceptable evidence for residency. In our telephone survey, we asked examiners if they would accept a notarized statement from babysitters. We estimate that 62 percent of recertification examiners would not accept a notarized statement from a babysitter as evidence. The nonacceptance rate went up to 79 percent if the notarized letter was from a relative, such as a grandparent, who claimed to be the child’s babysitter. Several examiners said that they would not accept the notarized letter because the notary public verifies the signature but not the content of the letter. These examiners are correct in their understanding of the purpose of the notary public. However, a notarized letter from a child-care provider is a document listed on Form 886-R as acceptable proof of residency. We do not know how many taxpayers failed to recertify for the EIC because examiners would not accept a notarized letter from their babysitter. However, telling taxpayers that a notarized letter is acceptable and then refusing to accept it can frustrate taxpayers and subject them to unnecessary burden. Not only would those taxpayers have spent unnecessary time and effort writing the letters and locating a notary public, but they would have had to pay for the notary public’s service. Perhaps the more problematic issue related to evidence of child care is the general unwillingness of examiners to accept statements from relatives. Some examiners told us, for example, that they would accept child-care provider statements if they were from child-care centers, but expressed the belief that relatives would lie to help a taxpayer get the EIC. While we understand the hesitancy to accept a relative’s statement, refusing to accept child-care statements from relatives can pose a hardship for low-income taxpayers who use relatives for child care.
The problem is compounded by the clear implication on Form 886-R that a “notarized statement from a child care provider” containing certain information, such as the child’s name, address, and dates of care for the entire tax year, is acceptable documentation to verify that a child lived with the taxpayer. Form 886-R says nothing to alert taxpayers that additional documentation may be needed if the child-care provider is a relative. Grandparents and other relatives play an especially large part in the care of poor preschoolers. In a March 1996 report entitled Who’s Minding Our Preschoolers? and an update issued in November 1997, the Census Bureau found that, in 1993 and 1994, relatives provided care for 62 percent of preschoolers in poor families while their mothers were working. This reliance on relatives, and especially grandparents, for child care was noted again in the Census Bureau’s October 2000 report entitled Who’s Minding the Kids? Child Care Arrangements. Among other things, the report concluded, using Fall 1995 data, that “Fifty percent of preschoolers were cared for by a relative, with grandparents being the single most frequently mentioned care provider (30 percent).” In reports issued in May 1997 and November 1999, we discussed three major barriers that confront low-income persons in trying to find child care: availability; accessibility; and cost, especially for infants and toddlers. As discussed in these reports, many parents in low-income families are likely to obtain work at low-skill jobs, such as janitor or cashier, that operate on nonstandard schedules, and their workplaces often do not offer child care during hours outside the traditional “9 to 5” work schedule; according to a 1999 Urban Institute paper, more than a quarter of low-income mothers work night hours; accessibility, such as transportation to get to providers, was especially problematic in rural or remote areas; and child care consumes a high percentage of poor families’ income. Regarding the cost of child care, the Census Bureau, in its October 2000 report, said that poor families who paid for child care in 1995 “spent 35 percent of their income on child care, compared with 7 percent spent by nonpoor families.” Asking relatives to serve as child-care providers may be one way for poor families to limit the cost of child care. In that regard, the Census Bureau noted in its March 1996 report that preschoolers in poor families were 50 percent more likely to be cared for by their grandparents or other relatives than were preschoolers in nonpoor families. As noted in several places throughout the preceding discussion, low-income taxpayers face many problems that complicate their ability to satisfy the evidentiary requirements associated with the EIC recertification program. For example, many low-income taxpayers move from location to location for job reasons, have children who receive their medical care at hospital emergency rooms and have no medical insurance, and rely on relatives for free child-care service instead of taking their child to a child-care center. “Low-income taxpayers usually cannot afford to take time off from work to gather the documentation required. They often do not maintain financial records. Many have moved several times, making it even more difficult to provide what is asked of them.
Obtaining such documentation may therefore involve long-distance calls, which are beyond the financial means of these taxpayers.” In general, the 10 LITC representatives we talked with said that the recertification process was confusing to their clients and difficult to comply with. Some representatives noted that these problems had caused clients to give up on EIC recertification. One LITC representative said that for migrant workers, getting documentation might include writing to Mexico for birth certificates and other information. According to the representative, (1) some agencies or companies may charge a fee for documents; (2) requesting information through the mail would be difficult since many low-income taxpayers are illiterate; and (3) it takes time to gather support, and many taxpayers get discouraged and give up. Another LITC representative said his client gave up on the EIC because he had moved to another city for a new job and getting the records IRS wanted would require him to take time off from work and travel back to his old home town, neither of which he could afford to do. Some LITC representatives told us that some examiners were more lenient than others in assessing supporting documents and that third-party statements were not always treated the same. Four of the 10 LITC representatives we interviewed said that they have seen some of their clients’ EIC claims denied because they could not substantiate that the child was a dependent. However, an EIC child does not have to be a dependent of the taxpayer to qualify that taxpayer for the EIC. As such, financial support, which is a factor in determining if a child qualifies as a taxpayer’s dependent, should not be a factor in determining if the child is a qualifying child for EIC purposes. For example, IRS correspondence disallowing such claims has included statements such as the following: “Since you have not verified that you are entitled to the exemption(s) claimed on your return, we have disallowed the deduction. Since the exemption for your child (or children) has been disallowed, you are not entitled to the earned income credit and/or child tax credit; therefore we have disallowed it/them.” Contrary to those statements, denial of a dependency exemption does not automatically mean that a taxpayer is not entitled to the EIC. In an effort to see how consistently IRS examiners assess evidence submitted by taxpayers, we presented 21 examiners at the four processing centers we visited with five scenarios involving differing sets of supporting documents. We obtained these documents from EIC recertification cases that we had reviewed. We deleted taxpayer-identifiable information, such as names, Social Security numbers, and addresses, from the documents before giving them to the examiners. The five scenarios were as follows: Case A—a single mother sending in copies of Social Security cards, the child’s birth certificate, and a school record listing the child’s address. Case B—a married couple, filing jointly, sending in copies of Social Security cards, the child’s birth certificate, and a locally devised IRS form signed by a school official verifying the child’s address. Case C—a single father sending in copies of Social Security cards, the child’s birth certificate, an immunization certificate showing the taxpayer as the parent and that the child received shots throughout the tax year, and a formal lease listing the taxpayer as the lessee but with no reference to the child.
This case also included a notarized letter from the taxpayer’s grandmother stating that she provided child care for the taxpayer’s daughter while the taxpayer worked. The grandmother gave her own address and Social Security number. Case D—a single father sending copies of Social Security cards, the child’s birth certificate, an immunization record that did not have either the child’s or the taxpayer’s name, various monthly rental receipts not showing the full dates, and a letter from someone (without a title) written on apartment letterhead. Case E—a single father sending in copies of Social Security cards, the child’s birth certificate, a lease agreement not listing the child’s name, and a non-notarized letter from a babysitter stating that she cared for the taxpayer’s child throughout the year while the taxpayer was at work. The babysitter mentioned the salary she received from the taxpayer, but did not give her address, telephone number, or Social Security number. As seen in table 1, in no case did all examiners agree and, in some cases, their decisions varied significantly. Cases B, C, and E showed the most consistent decisions. Of the 19 examiners who accepted the Case B documentation, 7 said that they did so because the taxpayer was married and filed jointly and because the child lived with both parents, and 1 said that he was swayed by the school verification (the other 11 did not explain their reasoning). Case C was almost unanimously rejected because examiners would not accept a notarized letter from the taxpayer’s grandmother who claimed to be the child-care provider. Although the grandmother’s letter had met all the specifications listed on Form 886-R, examiners still did not accept it as adequate proof of living arrangement. This is consistent with the results of our telephone survey of examiners, which, as discussed earlier, showed that 79 percent of examiners would not accept such a letter. In Case E, we included a nonrelative babysitter’s letter as evidence of residence. Although the babysitter’s letter was not notarized and did not have the babysitter’s Social Security number or address, more examiners were willing to accept this letter than the notarized letter in Case C from a grandmother who gave her Social Security number as required by IRS. Examiners’ decisions varied significantly in Cases A and D. For Case A, three examiners pointed out that they would not accept the school record submitted because it pertained to a school year and not the tax year. A taxpayer would have to submit school records for 2 school years to cover the tax year in question. Some examiners who decided that the documents in Case D did not support recertification thought that the apartment letterhead on the letter saying that the taxpayer lived there looked too simplistic or fake to be trusted. They pointed out that almost anyone with a computer could easily come up with such a letterhead. IRS is aware of the need for more consistency in the evaluation and determination of EIC cases. According to the Director of Reporting Compliance in IRS’s Wage and Investment Division, IRS is in the process of developing a decision support tool to be used by examiners working EIC cases. Because all EIC audits involve the same basic issue—proving that the EIC claim satisfies all eligibility tests—the decision tool is to be used for all EIC cases, including those involving recertification.
The goals of this project are to (1) automate the decision process examiners go through when performing audits, (2) reduce inconsistency in how EIC audits are conducted nationwide and subsequently improve the quality of examinations, and (3) decrease the time spent on EIC audits since the logic will be built into the tool to determine the appropriate questions for the individual case. IRS is planning to implement the first phase of this project and deliver training to examiners by May 2002. As described to us by the Director of Reporting Compliance, the first phase basically involves automating the current process. As such, it does not include a reconsideration of the documentation requirements discussed in this report. In that regard, for example, we noted, in reviewing preliminary information on the tool, that it included information to suggest that documentation of financial support was necessary to determine EIC eligibility. We advised the Director of our concerns in that regard, and he agreed to look into the matter. According to the Director, the project team is expected to take on the issue of what documentation taxpayers need to submit to prove their eligibility for the EIC during phase 2 of the project. In a related move, an IRS/Treasury task force was formed in February 2002 to comprehensively review the EIC program in general. The task force’s objective is to develop recommendations for achieving the objectives of the EIC program “while reducing taxpayer confusion and increasing the accuracy of the administration of benefits.” The task force was to complete its work within 4 months. Administering the EIC is not an easy task for IRS. IRS has to balance its efforts to help ensure that all qualified persons claim the credit with its efforts to protect the integrity of the tax system by guarding against fraud and other forms of noncompliance associated with the EIC. Furthermore, as with other provisions of the tax code, IRS must minimize the burden imposed on taxpayers yet ensure that it has a reasonable basis for judging whether taxpayers have properly claimed the credit. Although the recertification program provides a vehicle for combating EIC noncompliance, we believe that the program unnecessarily burdens taxpayers and provides inadequate assurance that IRS has a reasonable evidentiary basis for determining whether recertification applicants should be granted the EIC. As a consequence, taxpayers may be discouraged from claiming credits to which they are entitled or IRS may make poorly supported decisions in allowing or disallowing the credit. We identified several opportunities to make the recertification program less confusing to taxpayers and the decisions reached more accurate and consistent, without adversely affecting IRS’s ability to protect against EIC noncompliance. Two important forms used in the recertification process are problematic. Form 8862 is required of all taxpayers seeking recertification, yet 86 percent of IRS examiners who audit recertification cases say they do not use it. Since IRS is basically using Form 8862 only as a trigger for initiating the recertification process, we believe that a simpler version of Form 8862 could serve that same purpose. 
Form 886-R, which tells taxpayers what documentation they need to submit to prove their eligibility for the EIC, says nothing about documentation that taxpayers in nontraditional childrearing arrangements—which are likely common among the EIC recipient population—need to provide to demonstrate that they meet the EIC relationship test. At the same time, Form 886-R lists documentation that substantial majorities of examiners said they would not accept. The form states that a notarized statement from a child-care provider is acceptable evidence that a child lived with the claimant. However, 62 percent of examiners said that they would not accept such statements generally and 79 percent said that they would not accept such statements from relatives who provide child care. Other documentation listed on Form 886-R, while useful in gauging a taxpayer’s eligibility for the EIC, can lead to unnecessary taxpayer burden. IRS could minimize that burden and increase the probability of obtaining useful information by clarifying Form 886-R itself or providing simple supplemental forms that serve the same purpose. For instance, taxpayers would be less likely to submit school year attendance information rather than tax year attendance information if IRS were to develop a simple form that specified the period (e.g., January through December 2000) for which taxpayers must provide information. A taxpayer could then take the form to the school(s) for completion. When IRS has gathered information to judge whether a taxpayer should be recertified, examiners reviewing the information are likely to reach differing conclusions. The 21 examiners who reviewed five case scenarios we developed based on actual case files did not all agree on any scenario and, in some cases, reached widely varying judgments about whether the evidence was sufficient to support an EIC claim. Furthermore, 53 percent of the examiners we interviewed said that they have sometimes denied recertification because taxpayers did not provide documentation of financial support for the EIC-qualifying child—reflecting a fundamental misunderstanding of the law since financial support is not a criterion for the EIC. The results of our review suggest that IRS needs to reassess the evidentiary requirements for recertification and take steps to better ensure that examiners understand and more consistently apply the criteria for recertification. The inability to prove that qualifying children have lived with taxpayers for the requisite period of time—the residency requirement—has historically been a major reason why taxpayers are judged not eligible for the EIC. IRS examiners will continue to exercise discretion in determining whether documentation is sufficient even when using IRS’s proposed new decision support tool. Furthermore, each of the three types of acceptable documentation cited on Form 886-R for establishing residency can prove problematic for an EIC claimant. Therefore, the chances of a claimant being able to prove to an examiner’s satisfaction that a child’s living arrangement meets EIC requirements might be enhanced if taxpayers were encouraged to send IRS various types of documentation. Form 886-R, as currently worded, encourages taxpayers to send in just one type of documentation—be it school records, medical records, or statements from a child care provider—which can leave an examiner with less than conclusive evidence. 
If taxpayers provided more than one document, an examiner could disregard a document that seemed questionable but possibly find one or more of the other documents persuasive. Also, a pattern of evidence across several corroborating documents may provide a more meaningful basis for examiners to judge residency. Although our review was directed at the EIC Recertification Program, many of our findings would also apply to other IRS audits of EIC claims because IRS’s requirements for proving eligibility for the EIC apply to all EIC claimants, not just those who have to recertify. In that regard, while we understand that it is not possible, and probably not desirable, to eliminate all subjectivity from examiners’ decisions about EIC eligibility, there is room to bring more consistency to that process—not only consistency among examiners but also consistency between the requirements of the tax law (e.g., no financial support test to claim the EIC) and examiners’ practices. IRS recognizes the need for more consistency and is working to develop a decision support tool for EIC audits. More broadly, an IRS/Treasury task force has been charged with developing recommendations for making the overall EIC program less confusing to taxpayers and easier for IRS to administer. The results of our work should be useful to IRS in developing the decision support tool and to the task force in deliberating on possible changes to the EIC program. We recommend that the commissioner of Internal Revenue reassess the evidentiary requirements for recertification. As part of that reassessment, we recommend that the commissioner do the following: Revise Form 8862 to make it a simple request for recertification that IRS can use to trigger the recertification process and eliminate all of the information that taxpayers are now asked to provide on the form. Revise Form 886-R (and similar forms used for other EIC audits) to clarify that taxpayers who are seeking EIC recertification do not have to demonstrate that their EIC-qualifying child is a dependent to qualify for the EIC; help taxpayers understand what documentation they must provide (such as birth certificates, adoption papers, etc.) to establish their relationship with the EIC-qualifying child, especially when the child is not their natural born son or daughter; eliminate the need to have the statement from a child-care provider notarized, since a notary public does not verify the content of the statement and most examiners placed no validity on the notary stamp; and encourage taxpayers to submit more than one type of document to demonstrate that the EIC-qualifying children lived with them. If IRS is not willing to accept a relative’s child-care statement as evidence that a child lives with the taxpayer, make that clear on Form 886-R, on similar forms used for other EIC audits, and in the EIC decision support tool and suggest additional evidence that a taxpayer might provide. Whatever IRS’s official position on statements from relatives, ensure that examiners are aware of that position and apply it consistently. Develop a standard form that taxpayers can give to a school or health-care provider that specifies the information needed and on which examiners can indicate the period of time for which that information is needed. If IRS decides not to develop a standard form, revise Form 886-R to clearly remind taxpayers that records for parts of 2 school years are needed to document a living arrangement for the tax year. 
Take appropriate steps to ensure that the new EIC decision support tool does not continue the inappropriate linkage of financial support to decisions on EIC eligibility. In conjunction with the establishment of the EIC decision support tool, which is intended to improve consistency among EIC examinations, provide examiners with the training needed to better ensure consistent and accurate decisions. As part of the training, emphasize to examiners the difference between the eligibility requirements for an EIC-qualifying child and a dependent. We requested comments on a draft of this report from IRS. We obtained written comments in an April 10, 2002, letter from the commissioner of Internal Revenue (see app. V). The commissioner cited several steps being taken with respect to the EIC, including the development of the decision support tool and convening of the IRS/Treasury task force, referred to earlier, and the redesign of EIC taxpayer notices. The commissioner said that the IRS/Treasury task force would consider the findings discussed in this report in evaluating “legislative and administrative solutions to recertification problems.” With respect to our recommendation that IRS revise Form 8862, the commissioner said that the Wage and Investment Division will study the use of Form 8862 in EIC recertifications and the examination of related issues. Based on the results of that study and our findings, IRS “will evaluate possible revisions to the form that will make communications clearer, reduce taxpayer burden, and aid the recertification and examination processes.” IRS anticipates completion of this study by June 2003. Such a study, with the objectives cited by the commissioner, would be responsive to our recommendation. Regarding our recommendation that IRS revise Form 886-R and similar forms, the commissioner said that IRS plans to have revised forms that incorporate feedback from taxpayers and tax practitioners by the 2003 filing season. We agree wholeheartedly with the plan to obtain feedback from taxpayers and practitioners and look forward to seeing the results of these efforts. However, the commissioner’s response did not clearly indicate that the intended revisions to the forms would reflect the specific changes we recommended. We encourage the commissioner to ensure that the recommended changes are made. In response to our two recommendations relating to child care provided by a taxpayer’s relative, the commissioner said the following: “A child-care provider’s statement by itself may not be sufficient to verify eligibility. In that instance, the taxpayer will need to provide additional collaborating evidence to support his or her claim. We will show examples of this evidence on Form 886-R.” IRS will enhance examiner awareness of IRS’s official position on this issue and the consistency of its application through the decision support tool, in examiner training and the Internal Revenue Manual, and during quality reviews. The actions referred to by the commissioner would be responsive to our recommendations. With respect to our two recommendations about helping ensure that taxpayers obtain documentation for the proper time period, the commissioner said that IRS was revising Form 886-R to clearly remind taxpayers that records for parts of 2 school years are needed to document a living arrangement for the tax year. That would be responsive to our recommendation. 
Finally, the commissioner said that the new EIC decision support tool has been revised to incorporate our recommendation that IRS take appropriate steps to ensure that the new tool does not continue the inappropriate linkage of financial support to decisions on EIC eligibility. The new tool is to be rolled out nationwide in May 2002. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means; the Ranking Minority Member of this Subcommittee; the secretary of the Treasury; the commissioner of Internal Revenue; the director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. This report was prepared under the direction of David J. Attianese, Assistant Director. Other major contributors to this report are acknowledged in appendix VI. If you have any questions, contact Mr. Attianese or me on (202) 512-9110. The Congress established the earned income credit (EIC) in 1975 to offset the impact of Social Security taxes paid by low-income workers and to encourage low-income persons to choose work over welfare. A significant number of taxpayers are affected by EIC. In 2001, 18.7 million taxpayers claimed a total of $31 billion in EIC. Since 1995, we have identified EIC noncompliance as one of the high-risk areas within IRS. EIC noncompliance has resulted in billions of dollars in EIC claims that IRS paid, but should not have. In its most recent EIC compliance study, IRS determined that of an estimated $31.3 billion in EIC claims for tax year 1999, between $9.7 billion and $11.1 billion was overclaimed. After deducting the estimated amount of those overclaims that it recovered during the processing of returns and through enforcement programs, IRS determined that between $8.5 billion and $9.9 billion in tax year 1999 EIC claims was paid out that should not have been. The Taxpayer Relief Act of 1997 reflects the Congress’ concern about the level of EIC noncompliance. Among other things, the 1997 act amended the Internal Revenue Code to provide that taxpayers who are denied EIC during an IRS audit are ineligible to receive the EIC in subsequent years unless they provide documentation to demonstrate their eligibility. IRS developed a recertification program designed to deal with this new requirement. Taxpayers were first required to recertify, based on a 1997 audit, when submitting their 1998 tax returns. Tax year 1998 returns filed in 1999 were the first returns to which affected EIC claimants would have to attach a Form 8862 for recertification. In preparation for that event, IRS provided little information to taxpayers on what to expect when they sought recertification. IRS issued recertification guidelines to service center examiners at the beginning of the 1999 filing season but, according to examiners we interviewed, gave no formal training on recertification to examination staff. As described in appendix II, we and TIGTA found that service centers did not consistently follow the recertification guidelines, and a number of forms and letters IRS used for recertification contained inconsistent or irrelevant information. IRS’s outreach and correspondence to taxpayers and its training of examiners have improved since then. 
For example, IRS began to distribute basic information on the recertification program through its web site on the Internet; Publication 596 (Earned Income Credit) was revised to include a section on what taxpayers need to do if they have been disallowed the EIC as a result of an audit; some changes were made to improve the quality of IRS correspondence; and more guidance was provided to examiners. IRS correspondence denying the EIC advised affected taxpayers as follows: "The law now requires when we deny EIC, we must also deny it for any succeeding years unless you provide information showing you are entitled to the credit. You must, therefore, complete and attach Form 8862, Information to Claim Earned Income Credit After Disallowance, to the next return on which you claim EIC. While we determine if you are entitled to the credit, we will delay any refund due. If you claim EIC on your return without attaching a completed Form 8862, we will disallow the credit. You can get Form 8862 at most locations where tax forms are available. You will also be able to submit Form 8862 electronically when you file your federal tax return…" None of the 1999 IRS publications, forms, and instructions regarding the EIC provided any information on the recertification process or requirements other than the need to file Form 8862. Even IRS's publication of the need to file Form 8862 was not completely effective. An internal IRS study found that of the 312,000 required-to-recertify taxpayers, 38 percent (118,989) claimed the EIC again for tax year 1998. However, nearly 46 percent of these taxpayers (54,194 of the 118,989) did not attach Form 8862 to their returns and thus were automatically denied the EIC. Within IRS, there was also confusion over the recertification process. At the beginning of the 1999 filing season, IRS issued recertification guidelines for service center examiners, but examiners we interviewed said that there was no formal training for examiners on recertification. With some exceptions pertaining to the EIC-qualifying children for whom an examiner should seek verification and to how the recertification indicator should be handled after a taxpayer has been recertified, the recertification process essentially follows IRS's normal process for conducting examinations via correspondence. During our review of the 1999 filing season, we found that (1) form letters that IRS sent to taxpayers seeking recertification contained inconsistent and irrelevant information; (2) form letters that IRS sent to taxpayers asked for information beyond that specified in the recertification guidelines; and (3) service centers were not consistently following the recertification process. A more detailed review by TIGTA disclosed, among other things, that (1) the indicator used to identify taxpayers who must recertify was not always removed accurately; (2) some suspended refunds were not released timely; (3) recertification audits were not always processed in a timely manner; (4) not all recertification determinations were accurate; and (5) IRS correspondence was unclear. TIGTA attributed these problems, in part, to (1) IRS correspondence that did not clearly explain how the program worked or what was required for the taxpayer to be recertified and (2) IRS's failure to ensure that employees received, understood, and implemented recertification procedures. (See app. II for TIGTA's findings and IRS's corrective actions.) Outreach to taxpayers for filing seasons 2000 and 2001 improved compared with 1999. 
For example, in 2000, IRS revised Publication 596 (Earned Income Credit) by expanding the section on EIC recertification. The publication provided examples of who would be required to file Form 8862 and alerted taxpayers that they may have to provide additional documentation before being recertified. In 2001, IRS included questions on EIC recertification in the “Frequently Asked Questions” section of its Web site and further expanded the chapter on EIC recertification in Publication 596. The 2001 improvement in outreach was especially critical because the Ticket to Work and Work Incentives Improvement Act of 1999 (P.L. 106- 170) had tightened the definition of an eligible foster child for EIC purposes. IRS publicized this change on its Web site, in Publications 596 and 553 (Highlights of 2000 Tax Changes), and on Schedule EIC. Recertification training for examiners also improved compared with 1999. EIC training videos that were sent to IRS’s processing centers for the 2001 filing season included materials on recertification. IRS also incorporated the recertification guidelines into the Internal Revenue Manual section dealing with correspondence examinations in an effort to improve program consistency. Forms and letters were revised and examiners were instructed, via IRS’s internal Taxpayer Service Electronic Bulletin Board, to correct improper handling of recertification cases. Since July 1999, we and the Treasury Inspector General for Tax Administration (TIGTA) have reported several concerns about the EIC Recertification Program and have made several recommendations. In response to those recommendations, IRS implemented several corrective actions. The recommendations and corrective actions are described in tables 2 and 3. To help identify any problems taxpayers may have in understanding and complying with the EIC recertification process and determine how consistently IRS examiners assess evidentiary support, we conducted a telephone survey of IRS examiners doing recertification audits. We obtained from IRS a list of all examiners who were working on EIC recertification cases as of April 2001. From that list of 323 examiners, we selected a simple random sample of 105 examiners. We found that 12 of those 105 examiners were not doing recertification audits at the time of our survey and 3 others were unavailable for us to interview during our survey timeframe. Therefore, our survey results represent the views of about 277, or about 97 percent, of the estimated 286 examiners doing recertification audits at the time of our survey. The estimates we made from our telephone survey and their 95-percent confidence intervals are provided in table 4. In addition to those named above, Karen Bracey, Tara Carter, Art Davis, Ben Douglas, Ann Lee, Susan Mak, Anne Rhodes-Kline, Clarence Tull, and James Ungvarsky made key contributions to this report.
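The projection from the telephone-survey sample to the examiner population described above is straightforward arithmetic. The following Python sketch is our own illustration of that projection and of one common form of a 95-percent confidence interval for a sampled proportion (with a finite population correction); the variable names and the example answer count of 60 are hypothetical and are not taken from the survey data.

```python
import math

# Our own illustration of the survey projection described above; the
# variable names and the example answer count are hypothetical.
LIST_SIZE = 323      # examiners on IRS's April 2001 list
SAMPLE = 105         # simple random sample drawn from that list
OUT_OF_SCOPE = 12    # sampled examiners not doing recertification audits
UNAVAILABLE = 3      # sampled examiners unavailable during the survey period

eligible_in_sample = SAMPLE - OUT_OF_SCOPE           # 93
interviewed = eligible_in_sample - UNAVAILABLE       # 90

# Project the in-sample eligibility rate to the full list to estimate how
# many listed examiners were actually doing recertification audits.
est_population = LIST_SIZE * eligible_in_sample / SAMPLE              # ~286
est_represented = est_population * interviewed / eligible_in_sample   # ~277
print(round(est_population), round(est_represented),
      f"{est_represented / est_population:.0%}")                      # 286 277 97%

def proportion_ci(p_hat, n, pop, z=1.96):
    """Approximate 95% confidence interval for a proportion estimated from a
    simple random sample, with a finite population correction."""
    se = math.sqrt(p_hat * (1 - p_hat) / (n - 1) * (1 - n / pop))
    return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

# Hypothetical example: 60 of the 90 interviewed examiners give one answer.
print(proportion_ci(60 / 90, n=90, pop=286))
```

The first print reproduces the figures cited above (about 286 examiners doing recertification audits, of whom the interviews represent about 277, or roughly 97 percent); the interval function shows the general form of a confidence interval like those reported in table 4, not IRS's or GAO's exact computation.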
The earned income credit (EIC) is a refundable tax credit available to low-income, working taxpayers. Administering the EIC is not an easy task for the Internal Revenue Service (IRS). IRS has to balance its efforts to help ensure that all qualified persons claim the credit with its efforts to protect the integrity of the tax system and guard against fraud and other forms of noncompliance associated with EIC. Although IRS made some changes to its correspondence, improved its examiner training, and expanded taxpayer outreach, certain aspects of the recertification process continue to cause problems for taxpayers. Since the inception of the EIC Recertification Program in 1998, IRS has taken steps to improve some of the letters and forms it uses to correspond with taxpayers about the program. However, two standard forms that IRS uses in corresponding with taxpayers as part of the recertification process can lead to unnecessary taxpayer burden. IRS asks taxpayers to submit certain information as part of the process that can be difficult for some EIC claimants to obtain or is inconsistent with what many examiners consider acceptable.
FFELP is the largest source of federal financial assistance to students attending postsecondary institutions. In fiscal year 1994 students received about $23 billion in FFELP loan commitments, including about $14.8 billion in subsidized Stafford loans. The Department of Education pays interest to lenders on the behalf of subsidized Stafford loan borrowers while they are in school and during a subsequent 6-month grace period. This interest benefit is not available to borrowers for other FFELP loans. The private lenders that provide these loans may not discriminate on the basis of race, national origin, religion, sex, marital status, age, or handicapped status but, according to a Department policy official, may deny loans to eligible borrowers who do not meet their lending standards. Lenders may, for example, deny loans to students attending proprietary (for profit, typically trade and vocational) institutions or schools with high loan default rates. They may also withdraw from the program. Guaranty agencies, designated state or private not-for-profit entities, help administer FFELP by, for example, reimbursing lenders if borrowers fail to repay their loans. If an eligible borrower experiences difficulty obtaining a subsidized Stafford loan, guaranty agencies are required to provide one. The agencies may do so either directly or through a lender authorized to make LLR loans. Guaranty agencies must provide subsidized Stafford LLR loans to eligible students that have been denied a loan by two or more participating lenders. This requirement does not apply to unsubsidized Stafford loans. Several major changes to the subsidized loan program may influence the availability of loans. The 1992 amendments, for example, reduced the interest revenue lenders can receive from subsidized loans, and the 1993 Student Loan Reform Act reduced the rate at which guaranty agencies generally reimburse lenders if borrowers fail to repay their loans. In addition, the 1993 act established FDSLP to provide loans to students from the Department of Education rather than from private lenders. This program is expected to provide at least 60 percent of federal student loans by the 1998-99 academic year. Such reductions in student loan revenue and competition from the direct student loan program could reduce the profitability of student loans and reduce lenders’ willingness to offer new loans to students. In response to our questionnaire and in discussions with us, participants in the subsidized Stafford loan program expressed differing views on the risk that eligible students could be denied loans through the end of fiscal year 1995. Most but not all guaranty agencies have arrangements in place to provide loans to students that have difficulty obtaining loans. The Department has several options for ensuring access if guaranty agencies are not able to do so without assistance. As some lenders become selective in making Stafford loans or stop participating in the program, many lenders and guaranty agencies expect some eligible subsidized Stafford loan borrowers to be denied loans by one or more lenders. We asked program participants to describe the risk that 5 percent or more of eligible borrowers will be refused a subsidized Stafford loan by one or more lenders through the end of fiscal year 1995. Department officials with whom we spoke foresaw little or no risk that lender refusals to make loans would be widespread. Sallie Mae officials also doubted that as many as 5 percent of eligible borrowers would be denied a loan. 
The President of the Consumer Bankers Association said that there is "some" risk that 5 percent or more would be denied a loan. The guaranty agencies that responded to our questionnaire had a wide range of views on this question. (See fig. 1.) Thirteen of these agencies rated the risk "moderate," "great," or "very great," while 16 agencies said that there is "little or no risk." The remaining 13 agencies indicated "some risk." One responded that it did not know. (As shown in figure 1, the 13 agencies seeing more than some risk comprised 4 that rated the risk moderate, 4 great, and 5 very great.) Concerns that some students will have difficulty obtaining access to loans stem from lenders' deciding to leave the program or to become selective in making student loans. Additional departures of lenders from FFELP would represent a continuation of a trend begun in the mid-1980s. For example, during fiscal years 1984-1986, between 11,000 and 12,000 lenders participated in FFELP. The number of participating lenders has declined each year since, in part reflecting the general trend of mergers and consolidations in the financial community. By fiscal year 1993 the Department counted fewer than 7,500 active lenders. In response to our questionnaire, 28 agencies said that one or more of their lenders—lenders whose loans they guarantee—had indicated they plan to stop making subsidized Stafford loans sometime in the future. Six agencies said that this included one of their five largest loan volume lenders. Three of these agencies referred to the same lender. In addition to lenders that may stop making loans, concerns about loan access may arise if lenders choose to become more selective about making loans. Twenty agencies responded that one or more of their lenders planned to stop making loans to students attending institutions with student loan default rates that they—the lenders—consider too high. Most of these agencies said that 5 or fewer lenders would stop making loans, but one agency said that more than 200 lenders would stop. Most guaranty agencies—40 of the 43 respondents—had arrangements to provide LLR loans to eligible students. These arrangements included agreements with state secondary markets or other participating lenders to provide loans. Through September 30, 1993, the volume of loans provided through these arrangements had been small. More than half of the agencies said that they did not guarantee any LLR loans in fiscal years 1992 or 1993. The 16 agencies that provided data on LLR loans they made in fiscal year 1993 had an aggregate LLR loan volume of $32 million—about 0.3 percent of the $12.5 billion of subsidized Stafford loans made in fiscal year 1993. Twenty-six guaranty agencies responded to our question concerning the estimated capacity of their LLR arrangements. Twenty-two agencies estimated that they could have provided about $1.8 billion in LLR loans in fiscal year 1994. This represents an amount that is more than 50 times the total LLR loan volume for fiscal year 1993, and about one-eighth of total subsidized Stafford loan volume in fiscal year 1994. Three agencies cited "unlimited" LLR capacity. The largest guaranty agency, United Student Aid Funds, Inc., said that it has no set maximum on its LLR capacity. Nearly all of the agencies indicated they had LLR arrangements, and two-thirds had plans that the Department had approved. Department officials said that six agencies had not submitted plans for approval. 
Plans from the remaining agencies were either pending approval, or the plans submitted had been denied approval and the agencies had not resubmitted their plans. (See table 1.) Thirty-one guaranty agencies responded that they had agreements with lenders to provide LLR loans, but only 20 agencies had such agreements in writing. All LLR agreements but one either allow lenders to withdraw from their LLR commitments at any time or do not specify withdrawal terms. Four agreements specified that the arrangements applied for a specific time period, ranging from 12 to 18 months. Department officials told us they have several tools to help ensure that eligible borrowers have access to guaranteed student loans. They can assist guaranty agencies in recruiting lenders to provide LLR loans, direct Sallie Mae to make the loans, provide federal advances (interest-free loans) to guaranty agencies to enable them to make LLR loans, or make loans through the direct loan program. The Department is also developing a data reporting mechanism that, according to Department officials, will improve its monitoring of guaranty agencies’ financial posture. It has proposed requiring each agency to submit annual 5-year financial projections. The Department recognizes that with the implementation of FDSLP, FFELP will require fewer guaranty agencies as the number of direct loans increases in relation to the number of guaranteed loans. Therefore, the Department is—and plans to continue—encouraging consolidation among guaranty agencies through mergers and takeovers in the belief that greater efficiency can be achieved through economies of scale. During the process of this consolidation, lenders could be left without guarantee services being available. In anticipation that such a condition may materialize, in 1994 the Department contracted with the private, nonprofit Transitional Guaranty Agency to provide loan guarantee functions, as the Department determines necessary. For those guaranty agencies having difficulty getting lenders to make student loans, particularly LLR loans, Department officials told us they can assist the agencies to recruit lenders or seek commitments from current LLR lenders to make more LLR loans. As of November 1, 1994, the Department had assisted one agency. According to Department and agency officials, it helped the California Student Aid Commission identify lenders to provide LLR loans to eligible borrowers at certain schools. The Department and Sallie Mae signed an agreement through which Sallie Mae could provide up to $200 million of LLR loans through fiscal year 1995. This amount can be increased by mutual written agreement between the Department and Sallie Mae. As of December 6, 1994, Sallie Mae made 149 unsubsidized Stafford LLR loans and 62 subsidized Stafford loans that were guaranteed by the Texas guaranty agency. Through the Higher Education Act of 1965, as amended, the Department can make federal advances to guaranty agencies to provide loan capital needed to make LLR loans. The statute also provides authority for Sallie Mae to make advances to guaranty agencies to enable them to make LLR loans. In addition, with the implementation of FDSLP, the Department has the option of making direct loans to students if guaranteed loans are unavailable. Many uncertainties make predictions about the availability of loans in future years very difficult. 
For example, it is unclear whether guaranty agencies’ LLR arrangements will ensure access because many agencies’ LLR agreements allow lenders to withdraw at any time. It is also unclear to what extent postsecondary institutions will increase their participation in FDSLP. As institutions elect to participate in FDSLP, the demand for FFELP loans will decline, which may in turn encourage additional lenders to withdraw from the program or become more selective in making loans. On the other hand, the demand for LLR loans may decline if schools whose students are obtaining LLR loans switch to FDSLP. It is also uncertain how the actions of the 104th Congress, whose leadership has pledged to constrain federal spending, might affect federal student loan programs and the Department’s ability to ensure access. Generally FFELP administrators foresaw little or no risk of widespread loan access problems through fiscal year 1995, the period covered by our review. However, several respondents to our questionnaire foresaw more risk. Guaranty agencies have arrangements to provide LLR loans to eligible students that encounter difficulties in obtaining a loan, although most of them allow lenders to discontinue their commitments with little or no advance notice. However, if such arrangements prove inadequate, the Department has several options to ensure students’ access to subsidized loans, which have proved adequate in the few instances in which they were used. It is too early to know with certainty if lenders will continue to provide subsidized loans to eligible borrowers, and this issue may need to be reevaluated in the future. We did our review from March 1994 through January 1995 in accordance with generally accepted government auditing standards. As arranged with your offices, we did not obtain agency comments on this report, although we did discuss its contents with Department program officials. These officials generally agreed with the information presented in the report. They did offer some technical suggestions, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions regarding this report. Major contributors include Joseph J. Eglin, Jr., Assistant Director, (202) 512-7009; Charles M. Novak; Benjamin P. Pfeiffer; Dianne L. Whitman; and Aaron C. Chin. The U.S. General Accounting Office (GAO) is conducting a congressionally requested study on the availability of guaranteed student loans to borrowers. As part of this study, we are asking all guaranty agencies to complete this questionnaire. Specifically, we are asking your agency to provide information about the extent of lenders’ willingness to continue providing subsidized Stafford student loans and your agency’s lender-of-last-resort (LLR) program. Please provide the following information about the person responsible for completing this questionnaire, so that we will know who to call to clarify information, if necessary. This questionnaire should be completed by the person who is most knowledgeable about lender participation and lender-of-last-resort programs. If this person is unable to respond to all of the questions, he or she may wish to seek the help of others in completing this questionnaire. ) This questionnaire asks for information related to only subsidized Stafford loans and by federal fiscal year (FFY). 
Please include all guarantee activity for these loans by your agency, except guarantees for which your agency provides guarantee services on behalf of another agency. If you have any questions, please call collect either Dianne Whitman at (206) 287-4822 or Ben Pfeiffer at (206) 287-4832. Please return your completed questionnaire within 5 days of receipt, in the enclosed preaddressed business reply envelope or by fax; if the envelope is misplaced, please send your questionnaire to Dianne Whitman, U.S. General Accounting Office, Jackson Federal Building, Room 1992, 915 Second Avenue, Seattle, WA 98174.

1. Consider all of your agency's guarantee activity, except guarantees for which your agency provides guarantee services on behalf of another agency. In total, about how many lenders either originated or purchased subsidized Stafford loans guaranteed by your agency during federal fiscal year (FFY) 1993 (October 1, 1992, through September 30, 1993)?

2. About how many of these lenders, if any, have informed your agency that they will stop providing subsidized Stafford loans sometime in the future? (n=42; range: 0-500; mean=19; median=2)

3. Have any lenders informed your agency that, by the end of FFY 1995 (September 30, 1995), they will no longer be providing subsidized Stafford loans to students attending post-secondary institutions with default rates that the lenders regard as too high? If yes, about how many lenders? (n=18; range: 1-200; mean=21; median=4)

A related question asked agencies to rate the risk that 5 percent or more of eligible borrowers will be refused a subsidized Stafford loan by one or more lenders through the end of FFY 1995, using the categories little or no risk, some risk, moderate risk, great risk, very great risk, and don't know.

6. In your opinion, is each of the following factors a major reason, a minor reason, or not a reason why your lenders may either stop providing or provide fewer subsidized Stafford loans? The factors listed were: extent of change in the program; increased complexity of the program; dissatisfaction with the Department of Education's management of the program; reduced interest rate and special allowance payments from the Department of Education; reduced interest rate paid by new borrowers; the 0.50 percent loan fee paid by lenders; reduction in the claims reimbursement rate from 100 to 98 percent (except for LLR and exceptional performance loans); concern about implications of the Federal Trade Commission (FTC) holder rule; expectations that lenders' market share will decline due to direct lending; concern about the "windfall" profits provision; concern about audits of lenders and resulting liabilities; and other reasons specified by the respondent.

Regardless of whether or not the Department of Education has approved your LLR plan, what arrangements, if any, does your agency currently have in place for ensuring that eligible borrowers who have been denied a subsidized Stafford loan will receive a loan? The arrangements listed were: we make these loans and hold them as lender-of-last-resort; we make these loans as a lender-of-last-resort and sell them to the state secondary market; we make these loans and sell them to a secondary market other than the state secondary market; we have arrangement(s) with the state or a state secondary market which makes these loans; we have arrangement(s) with lenders other than those mentioned above who make these loans (how many lenders: n=17; range: 1-14; mean=3; median=3); we refer borrowers to lenders willing to make the loans without a lender-of-last-resort designation; we have other arrangements; we currently have no arrangements in place.

8. Does your agency plan to change its arrangements for insuring access to loans? (yes/no)

9. If yes, please indicate whether your agency plans to make each of the following changes to its arrangements for insuring access to loans: solicit additional lenders currently not participating in the LLR program; arrange for commitment by lender(s) currently participating in the LLR program to increase the amount of LLR loans it is (they are) willing to make; increase the capacity of the guaranty agency to make LLR loans; turn LLR responsibilities over to the Department of Education or another entity; or do something else.

10. Did your agency provide guarantees for any lender-of-last-resort loans originated during either FFY 1992 or FFY 1993? (both years; FFY 1992 only; FFY 1993 only; neither)

11. What was the original gross principal dollar amount of lender-of-last-resort loans that your agency guaranteed during FFY 1992 and during FFY 1993? If you cannot provide the data by federal fiscal year, please enter the dollar amount and the annual time period for which you do have information. (FFY 1993: n=16; range: $0-$14,026,992; mean=$1,763,254; median=$172,847; sum=$31,738,576)

12. What is the projected gross dollar amount of your agency's subsidized Stafford loan guarantees for FFY 1994?

13. Consider your agency's projected FFY 1994 dollar amount for subsidized Stafford loans. What is the maximum amount that could be handled through your agency's current LLR loan arrangements? (n=26; range: $120,000-$1,200,000,000; mean=$83,260,670; median=$20,500,000; sum=$1,831,734,729; in addition, 3 agencies indicated "unlimited" capacity and one indicated "no set maximum")

If the dollar amount of LLR loans were to become greater than could be handled through your agency's current arrangements, please indicate whether your agency would take each of the actions below and, if yes, how likely it is that the action would succeed in increasing access to loans: solicit additional lenders not currently in the LLR program to provide loans; seek additional guaranty agency funding from non-federal sources to make loans directly; request that the state secondary market seek additional funding to enable it to either make LLR loans or purchase them from the guaranty agency; ask the Department of Education to advance funds to enable the guaranty agency to make these loans; ask the Department of Education to request that Sallie Mae make these loans; ask the Department of Education to make the loans directly; ask the Department of Education for other forms of assistance; or make other arrangement(s).

Does your agency currently have any verbal (informal) or written (either informal or formal) agreements for LLR loans with participating lenders? (verbal only; written only; written and verbal; neither)

16. With how many participating lenders does your agency have written agreements for lender-of-last-resort loans? (n=19; range: 1-14; mean=2; median=1)

Do the terms of any of these written agreements allow the lenders to withdraw from the agreements at any time? (yes, in all cases; yes, in some cases; no, may not withdraw; withdrawal terms not specified)

20. In any of these written agreements, can your lenders refuse to make a lender-of-last-resort loan to an eligible borrower (1) when the loan amount is below a minimum level, (2) when the loan would cause the lender to exceed a limit on the maximum number of loans, (3) when the loan would cause the lender to exceed the maximum amount of lender-of-last-resort loans it will make, or (4) under another condition?

21. Do any of these written agreements specify a length of time to which the terms apply? If yes, what proportion specify a length of time, and for what length of time do most of these written agreements apply?

Finally, agencies were asked to provide any comments about this study, this questionnaire, or the LLR program. THANK YOU FOR YOUR HELP!
Pursuant to a congressional request, GAO reviewed how recent legislative changes have affected the availability of federally subsidized Stafford student loans, focusing on: (1) the arrangements guaranty agencies have to provide loans to eligible borrowers; and (2) Department of Education efforts to ensure continued student access to subsidized Stafford loans. GAO found that: (1) some eligible students may have difficulty obtaining subsidized loans as lenders leave the program or become more selective in response to recent changes to the Federal Family Education Loan program (FFELP) and the introduction of the Federal Direct Student Loan program; (2) most guaranty agencies have made arrangements to provide loans to students that have difficulty obtaining loans; (3) the Department of Education has arranged for Sallie Mae to make lender-of-last-resort loans and has contracted with a new guaranty agency to provide guarantee services if existing guaranty agencies are unable to do so; (4) although it is difficult to predict how these arrangements will affect loan access after fiscal year (FY) 1995, FFELP administrators believe that there is little or no risk of widespread loan access problems through FY 1995; (5) Education has several options to ensure students' access to subsidized loans if guaranty agencies' arrangements prove inadequate; and (6) the issue of student loan access may need to be reevaluated in the future, since it is too early to know whether lenders will continue to provide subsidized loans to eligible borrowers.
In the mid-1990s, Congress directed DOE to develop the Stockpile Stewardship Program to provide a single, highly integrated technical program for maintaining the continued safety and reliability of the nuclear weapons stockpile. Stockpile stewardship comprises activities associated with conducting nuclear weapons research, design, and development; maintaining the knowledge base and capabilities to support nuclear weapons testing; and assessing and certifying nuclear weapons safety and reliability. Stockpile stewardship includes operations associated with producing, maintaining, refurbishing, surveilling, and dismantling the nuclear weapons stockpile. The Stockpile Stewardship Program’s objectives were updated as a result of the 2010 Nuclear Posture Review, which establishes the U.S. nuclear policy for the next 5 to 10 years, including the nation’s nuclear weapons stockpile requirements. The Nuclear Posture Review and the Stockpile Stewardship Program reinforce the New Strategic Arms Reduction Treaty between the United States and Russia. As part of this treaty, the United States has agreed to reduce the size of its strategic nuclear weapons stockpile from a maximum of 2,200 to 1,550 weapons, with the remaining weapons in the stockpile continuing to be an essential element of U.S. defense strategy. Nuclear stockpile requirements include a pit production capacity that is defined by estimating the number of pits NNSA needs to manufacture annually to effectively support the nuclear weapons stockpile. The demand for pits has fluctuated over the past decade for various reasons. Until 2005, NNSA planned to produce pits in a large-scale manufacturing plant to be built called the Modern Pit Facility, which would have increased pit production capacity per year to a range of 125 to 450 pits. This project was terminated and, at around the same time, NNSA began to study a new approach for modernizing the stockpile, called the Reliable Replacement Warhead program, which would have produced 50 pits per year and which was also short-lived. Through this program, NNSA would have designed new weapon components, including pits, to be safer and easier to manufacture, maintain, dismantle, and certify without nuclear testing. Since 2008, NNSA’s guidance has established pit capacity for future production at about 20 pits per year, with an upper range limit of 80 pits per year. In addition, NNSA has recently determined that pit lifetimes are longer than anticipated and that it may increase the reuse of existing pits, reducing the demand for newly manufactured pits. Currently, pit capacity requirements are uncertain and still in flux. Demand may again fluctuate as a result of the Nuclear Posture Review and changes to the Stockpile Stewardship Program. For example, there are still unknowns in implementing the Nuclear Posture Review and modernization work on each nuclear weapon type may require a varied number of new pits. To execute the activities to maintain and refurbish the nation’s existing nuclear weapons stockpile, NNSA oversees eight sites that comprise its nuclear security enterprise—formerly known as the nuclear weapons complex—which includes three national weapons laboratories, four production plants, and a test site, all of which carry out missions to support NNSA’s programs. 
One of these sites, Los Alamos National Laboratory, plays a crucial role in carrying out NNSA’s maintenance of the nuclear weapons stockpile, including (1) production of weapons components, (2) assessment and certification of the nuclear weapons stockpile, (3) surveillance of weapons components and weapon systems, (4) assurance of the safe and secure storage of strategic materials, and (5) management of excess plutonium inventories. Los Alamos was established in 1943 during the Manhattan Project in northern New Mexico. It is a multidisciplinary, multipurpose institution primarily engaged in theoretical and experimental research and development. A significant portion of Los Alamos’ work is focused on ensuring that nuclear weapons stockpile needs are met. Since 2000, pit production has been established within the Plutonium Facility Complex at Los Alamos’s Technical Area 55, and certified pits have been produced over the past 5 years in that facility. A particularly important facility at Los Alamos within Technical Area 55 is the nearly 60-year-old Chemistry and Metallurgy Research facility. The facility has unique capabilities for performing analytical chemistry, material characterization, and research and development related to plutonium. This includes activities that support the manufacturing, development, and surveillance of nuclear weapons pits; programs to extend the life of nuclear weapons in the stockpile; and nuclear weapon dismantlement efforts. This pit production mission support work was first assigned to Los Alamos in 1996. NNSA also currently maintains some plutonium-related research capabilities at other facilities, such as Livermore’s Superblock facility. These capabilities are necessary components of NNSA’s overall stockpile management strategy. NNSA and DOE also use the unique plutonium-related capabilities located at Los Alamos and Livermore to support the plutonium-related research needs of other national security missions and activities outside of the nuclear weapons stockpile work, including nuclear nonproliferation activities; homeland security activities, such as nuclear forensics and nuclear counterterrorism; waste management; and material recycle and recovery programs. The Chemistry and Metallurgy Research facility was initially designed and constructed to comply with building codes in effect during the late 1940s and early 1950s. In 1992, recognizing that some of the utility systems and structural components were aging, outmoded, and generally deteriorating, DOE began upgrading the facility. These upgrades addressed specific safety, reliability, consolidation, and security issues with the intent of extending the useful life of the facility for an additional 20 to 30 years. However, beginning in about 1997 and continuing to the present, a series of additional operational and safety concerns have surfaced. In particular, a 1998 seismic study identified two small parallel faults beneath the northern portion of the Chemistry and Metallurgy Research facility. The presence of these faults raised concerns about the structural integrity of the building in the event of an earthquake. DOE and NNSA determined that, over the long term, Los Alamos could not continue to operate the mission-critical support capabilities in the existing Chemistry and Metallurgy Research facility at an acceptable level of risk to worker safety and health. 
To ensure that NNSA can fulfill its national security mission for the next 50 years in a safe, secure, and environmentally sound manner, NNSA decided in 2004 to construct a replacement facility, known as the CMRR. Federal agencies, including DOE and NNSA, have experienced long-standing difficulties in completing major projects within cost and on schedule. To provide assistance in preparing high-quality cost and schedule estimates, we compiled best practices used throughout government and industry and, in March 2009, issued a guide outlining the criteria for high-quality cost and schedule estimates (GAO-09-3SP). Specifically, our guide identified four characteristics of a high-quality, reliable cost estimate: (1) credible, (2) well-documented, (3) accurate, and (4) comprehensive. In addition, our cost guide lays out 12 key steps that should result in high-quality cost estimates and hundreds of best practices drawn from across industry and government for carrying out these steps. For example, one of the key steps includes conducting an independent cost estimate—that is, one generated by an entity that has no stake in the approval of the project but uses the same detailed technical information as the project estimate. Having an independent entity perform such a cost estimate and comparing it to the project team's estimate provides an unbiased test of whether the project team's cost estimate is reasonable. Our guide also identified nine best practices for effectively estimating schedules: (1) capturing key activities, (2) sequencing key activities, (3) assigning resources to key activities, (4) establishing the duration of key activities, (5) integrating key activities horizontally and vertically, (6) establishing the critical path for key activities, (7) identifying total float (i.e., the time that activities can slip before the delay affects the completion date), (8) performing a risk analysis of the schedule, and (9) updating the schedule using logic and durations to determine dates. Many of these practices have also been incorporated into DOE's recent guidance for establishing performance baselines. The estimated cost to construct the CMRR, according to estimates prepared in April 2010, is nearly six times higher than the project's initial cost estimate that was prepared in 2005. The project's estimated completion date has also been delayed by at least 8 to 12 years. Our review of these most recent detailed cost and schedule estimates for the CMRR project found that the estimates generally reflect best practices, but are not yet entirely reliable. Since CMRR was first proposed, its costs have risen significantly, and its schedule has been repeatedly delayed. Specifically, in 2005, when DOE developed initial plans for CMRR, it estimated that the project would cost from $745 million to $975 million and would be completed between 2013 and 2017. This estimate was prepared using preliminary information—before a detailed project design was substantially under way—and was therefore considered by DOE to be a rough estimate. In April 2010, NNSA estimated that the CMRR will cost between $3.7 and $5.8 billion—a nearly six-fold increase from the initial estimate—and that construction will be complete by 2020—a 3- to 7-year delay. In February 2012, after we had provided NNSA with a draft of this report for its comments, NNSA announced that it had decided to defer CMRR construction by at least an additional 5 years, bringing the total delay from NNSA's original plans to 8 to 12 years. 
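The "nearly six-fold" growth can be checked directly against the estimate ranges above, and the role that deferral alone can play (project officials attribute about $1.2 billion of the growth to schedule delays, as discussed below) can be illustrated with a simple escalation calculation. The Python sketch below is our own illustration: only the two estimate ranges come from the report, while the 3 percent annual escalation rate and the unit base cost are hypothetical assumptions chosen to show the compounding effect.

```python
# Our own illustrative arithmetic; only the two estimate ranges below are
# taken from the report. The escalation rate and base cost further down
# are hypothetical, chosen to show the compounding effect of deferral.
initial_2005 = (0.745, 0.975)    # 2005 estimate, $ billions
revised_2010 = (3.7, 5.8)        # April 2010 estimate, $ billions

low = revised_2010[0] / initial_2005[0]        # ~5.0x
high = revised_2010[1] / initial_2005[1]       # ~5.9x
mid = sum(revised_2010) / sum(initial_2005)    # ~5.5x, midpoint to midpoint
print(f"growth: {low:.1f}x to {high:.1f}x (midpoints {mid:.1f}x)")

def escalated_cost(base_cost, annual_rate, years_deferred):
    """Cost of buying the same scope of work years_deferred years later."""
    return base_cost * (1 + annual_rate) ** years_deferred

rate = 0.03                      # assumed 3% annual escalation (hypothetical)
for delay in (3, 7, 12):         # delays in the range discussed in the report
    growth = escalated_cost(1.0, rate, delay) - 1
    print(f"{delay:>2}-year deferral at 3%: about +{growth:.0%} on the same scope")
```

Even under this modest assumed rate, a decade-long deferral adds more than 40 percent to the cost of unchanged scope, before any design changes are considered.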
NNSA officials explained that the majority of the cost increases occurred because of changes to the facility's design and because of project delays. Specifically:

Modifications to the facility's design. To address concerns about seismic activity, the project design was modified to strengthen the facility to withstand a potential earthquake. For example, significant design changes resulted from the need to thicken the concrete walls to satisfy increasingly stringent seismic requirements. In addition, to proceed to final design, project officials had to evaluate the potential effects of an earthquake on the facility's complex ventilation system. This effort included several studies, consultations with vendors and other designers, and an assessment of the availability of equipment that would meet seismic requirements. Overall, Los Alamos estimates the seismic-related design changes increased the project costs by almost $500 million.

Delays in the construction start date and longer overall project duration. CMRR construction was originally expected to begin in 2008, but was first delayed until 2013 and is now not expected to begin before 2018. The initial delay in starting construction from 2008 to 2013 had varying causes, including the facility design changes described previously as well as the additional time needed for NNSA to determine where and how to consolidate plutonium operations in the nuclear security enterprise, according to project officials. This delay in starting construction pushed the estimated construction completion date from between 2013 and 2017 to 2020—3 to 7 years later than initially expected. At the time, the facility was expected to be operational in 2022. These delays further increased costs, partly because inflation meant that equipment and materials became more expensive as time passed. In addition, the longer project duration also contributes to increases in the cost of workers' wages and salaries. Overall, project officials estimate that about $1.2 billion in additional costs resulted from these schedule delays.

In February 2012, NNSA announced another significant project delay—at least an additional 5-year deferral in starting the construction of the CMRR—resulting in a total delay of 8 to 12 years from NNSA's original plans. However, NNSA has not yet determined the impact of this additional delay on the project's costs. Our review of NNSA's most recent cost and schedule estimates for the CMRR construction project found that the estimates were generally well prepared but that important weaknesses remain. Specifically, we found that the CMRR cost estimate prepared in April 2010 exhibits most of the characteristics of high-quality, reliable cost estimates. As identified by the professional cost-estimating community and documented in our cost-estimating guide, a high-quality cost estimate is comprehensive, well-documented, accurate, and credible. Our review of the CMRR cost estimate found that the estimate exhibits three of the four characteristics of a high-quality estimate by being substantially comprehensive, well documented, and accurate, but only partially credible, as shown in table 1. Appendix II contains additional information about each of the four general best practice characteristics and our assessment of the estimate compared to detailed best practices. 
The CMRR cost estimate only partially met industry best practices for credibility because project officials did not use alternate methods to crosscheck major cost elements to see whether the results were similar under different estimating methods. In addition, according to our guide, there are varying methods of validating an estimate, but the most rigorous is the independent cost estimate, which is generated by an entity that has no stake in the approval of the project. Conducting an independent cost estimate is especially important at major milestones because it provides senior decision makers with a more objective assessment of the likely cost of a project. A second, less rigorous method for validating a project's cost estimate—an independent cost review—focuses on examining the estimate's supporting documentation and interviewing relevant staff. Independent cost reviews address only the cost estimate's high-value, high-risk, and high-interest aspects without evaluating the remainder of the estimate. An independent cost review of the entire CMRR project was initiated in 2011, but the more rigorous method of validation—conducting an independent cost estimate—has been used on only a small portion of the project, representing about 6 percent of the project's total costs. According to NNSA officials, DOE orders do not require NNSA to seek an independent cost estimate until just prior to establishing the project baseline, and project officials told us NNSA is preparing to have one conducted before the project baseline is established. However, until a quality independent cost estimate or another means of validating the estimate is completed for the entire project, DOE and NNSA officials cannot be confident that the current cost estimate is completely credible. With regard to CMRR's schedule, the project's schedule estimate fully met two, substantially met six, and minimally met one of the nine best practices for a high-quality schedule identified by our guide. For example, two of the best practices the estimate fully met concerned how well it (1) captured all of the project's activities, including design, construction, and other tasks that collectively form a comprehensive schedule, and (2) was kept up to date. Table 2 lists the best practices along with our assessment of the extent to which the project's schedule met each best practice. The CMRR schedule estimate minimally met industry best practices for conducting a schedule risk analysis. Namely, according to our guide, a high-quality schedule requires a schedule risk analysis that uses already identified risks, among other things, to predict the level of confidence in meeting a project's completion date and the amount of contingency time needed to cover unexpected delays. CMRR project officials identified and documented hundreds of risks to the project, but these risks were not used in preparing a schedule risk analysis. For example, project officials identified the following three risks that are likely to occur: (1) a necessary electrical system upgrade that might not be completed in time for construction activities, (2) uncertainties associated with the flow of simultaneous design changes, and (3) noncompliance with certain quality assurance standards for nuclear facilities. These risks could cause delays ranging anywhere from 1 to 5 years. Nevertheless, the project's schedule risk analysis identified only a 1-year schedule contingency for the entire project. 
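A schedule risk analysis of the kind our guide calls for is typically built as a simulation over the identified risks. The following Python sketch is a minimal illustration, not the CMRR project's analysis: the occurrence probabilities and delay ranges are hypothetical assumptions, chosen only so that each risk's impact falls within the 1- to 5-year band described above, and the sketch estimates how often the combined slip would exceed a 1-year contingency.

```python
import random

# Illustrative schedule risk analysis in the spirit described above.
# The occurrence probabilities and delay ranges are hypothetical; the
# 1-to-5-year impact band echoes the risks discussed in the text.
RISKS = [
    # (probability the risk occurs, (min delay, max delay) in years if it does)
    (0.5, (1.0, 3.0)),   # e.g., late electrical system upgrade
    (0.4, (1.0, 5.0)),   # e.g., churn from simultaneous design changes
    (0.3, (1.0, 4.0)),   # e.g., quality assurance noncompliance rework
]
CONTINGENCY_YEARS = 1.0
TRIALS = 100_000

random.seed(1)
exceed = 0
totals = []
for _ in range(TRIALS):
    slip = sum(random.uniform(*impact)
               for prob, impact in RISKS if random.random() < prob)
    totals.append(slip)
    if slip > CONTINGENCY_YEARS:
        exceed += 1

totals.sort()
p80 = totals[int(0.8 * TRIALS)]   # slip not exceeded in 80% of trials
print(f"P(slip > {CONTINGENCY_YEARS:.0f}-year contingency) = {exceed / TRIALS:.0%}")
print(f"80th-percentile slip = {p80:.1f} years")
```

Under these assumed inputs the 1-year contingency is exceeded in roughly four out of five trials; an analysis built on the project's actual risk register would give decision makers the confidence level and contingency figures that the best practice is intended to provide.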
If NNSA is unable to successfully mitigate these risks and they occur together, there is a high likelihood that the 1-year contingency that NNSA established may be exceeded. As a result, project officials cannot be certain that the schedule risk analysis reflects all identified risks. Project officials told us that, before the project baseline is established, they expect to have a schedule risk analysis that includes identified risks and that they are in the early stages of developing a plan to do so. NNSA is taking steps to mitigate the risks that have been identified and, because the project is still in early stages, many risks may be resolved. For example, to mitigate the risk that the electrical system upgrade would not be completed in time to avoid a delay in construction activities, project officials have identified specific steps to help ensure that the upgrade is performed in a timely manner. However, without a schedule risk analysis that contains the risks identified by CMRR project officials, NNSA cannot be fully confident, once it decides to resume CMRR construction plans, that sufficient schedule contingency is established to ensure that the project will be completed on time and within estimated costs. As a result, overall project costs could potentially exceed NNSA's April 2010 estimate of between $3.7 billion and $5.8 billion; moreover, NNSA has not yet determined the impact on the project's costs of its recent decision to defer CMRR construction for at least 5 years. Appendix III contains additional information on each practice and our assessment of the estimate compared to best practices. To replace the plutonium-related research capabilities in Los Alamos's deteriorating Chemistry and Metallurgy Research facility, NNSA considered several options. In the end, NNSA decided to build a minimally sized CMRR facility at Los Alamos with a broad suite of capabilities to meet nuclear weapons stockpile needs over the long term. These capabilities would also be used to support plutonium-related research needs of other departmental missions. NNSA evaluated these options based on their expected effect on cost, schedule, risk, and ability to meet the plutonium-related research needs of the nuclear weapons stockpile stewardship program. NNSA first focused on identifying and replacing the capabilities necessary to maintain and modernize the nuclear weapons stockpile. Specifically, these capabilities included those necessary to study the chemical and metallurgical properties of plutonium pits to ensure that they are properly produced, certified, and monitored over time so they remain safe and reliable. For example, to ensure that a nuclear weapon will function as intended, the plutonium inside the pits needs to meet strict specifications. Meeting these specifications requires having the capability to analyze and characterize the plutonium's chemistry and material properties. The specifications require NNSA to measure several chemical attributes, including chemical composition and impurities, as well as the pit's structural attributes, such as the metal's microscopic grain size, its texture, any potential defects, and its weld characteristics. 
NNSA identified at least 58 distinct capabilities that will be required in the new facility to allow it to conduct the analyses necessary to build at least one pit of every type currently in the stockpile. As many as 79 capabilities may be required if NNSA needs to manufacture a larger quantity of pits—up to its high estimate of 80 pits per year, which is the Department of Defense's published military requirement for pit production. In addition to research capabilities, NNSA determined that the new facility would need to provide other capabilities to support research operations. In particular, long-term plutonium storage space is needed to support plutonium-related research at CMRR. To house these needed capabilities, NNSA assessed three potential sizes for a new facility—22,500 square feet, 31,500 square feet, and 40,500 square feet. The 40,500 square foot option included about 10,500 square feet of unequipped space—known as contingency space—to allow for program changes, such as increased pit manufacturing. In addition, this contingency space could accommodate users outside Los Alamos, such as researchers from Livermore. However, in 2004, NNSA chose the smallest and least expensive option—22,500 square feet. NNSA officials told us that cost was the primary driver of this decision. NNSA's choice to build a minimally sized facility was questioned in two studies conducted subsequent to NNSA's decision in 2004. Specifically, a Los Alamos study conducted in 2006 found that increasing CMRR's size by 9,000 square feet—to a total of 31,500 square feet—would be the best option based on cost, schedule, risk, and the facility's ability to meet plutonium-related research needs. Furthermore, a separate independent study prepared for NNSA in 2006 determined that adding 9,000 square feet to CMRR would lower risk and increase facility flexibility but could cost an additional $179 million. NNSA officials told us that a smaller sized facility had the best chance of minimizing costs. NNSA officials acknowledge that the smaller size option poses more risk because the facility will include no contingency space. This space may be necessary, for example, to respond to potential increases in pit production needs if in the future they unexpectedly approach or exceed 80 pits per year. If this occurs, and no contingency space is available, other plutonium-related research beyond that required for the nuclear weapons stockpile will also likely be affected. According to NNSA and Los Alamos officials, these risks could be mitigated by conducting some nonweapons plutonium-related research at other facilities, such as Los Alamos's PF-4 pit production facility. However, PF-4 also has ongoing laboratory and storage limitations and may not be able to accommodate these other nonweapons plutonium activities. Subsequent to its 2004 decision to build CMRR at Los Alamos, NNSA continued to study other locations for consolidating plutonium-related research within the nuclear security enterprise. Specifically, as part of its development of a complexwide strategy to modernize nuclear research, development, and production facilities that support the nuclear weapons stockpile, NNSA studied consolidating the nation's plutonium-related research capabilities at Los Alamos, the Pantex Plant in Texas, the Nevada National Security Site in Nevada, the Savannah River Site in South Carolina, and the Y-12 National Security Complex in Tennessee. 
In December 2008, NNSA decided to consolidate plutonium research at Los Alamos and reaffirmed its earlier 2004 decision to locate the new CMRR at Los Alamos. Consolidating plutonium-related research capabilities at Los Alamos presented several advantages, including lower costs and risks when compared to other locations. For example, colocating plutonium analytical capabilities with Los Alamos's pit manufacturing capabilities reduced the costs and risks of protecting plutonium from potential theft. As part of NNSA's decision to consolidate plutonium research at Los Alamos, NNSA also decided that the CMRR would be used to support plutonium-related research needs of other non-weapons activities, including nuclear nonproliferation activities; homeland security activities, such as nuclear forensics and nuclear counterterrorism; waste management; and material recycle and recovery programs. However, the size of the planned CMRR facility—22,500 square feet—has not changed since NNSA's initial 2004 decision, which calls into question the facility's ability to support the needs of these other activities. NNSA's plans to construct the CMRR focused on meeting changing nuclear weapons stockpile requirements. However, CMRR may not be able to accommodate all stockpile and other plutonium-related research needs, particularly as other NNSA facilities reduce or end their plutonium research activities as a result of broader NNSA plans to consolidate its plutonium activities. NNSA's plans to construct the CMRR primarily focus on maintaining plutonium-related research capabilities that are necessary for meeting nuclear weapons stockpile requirements. NNSA designed the CMRR to support the capabilities necessary for maintaining the safety and reliability of the nuclear stockpile—namely, the testing, manufacturing, and certification of the pits—and, in particular, plutonium-related research capabilities, such as analytical chemistry and materials characterization, and associated special nuclear materials vault storage. More specifically, in designing the CMRR, NNSA analyzed detailed data on past nuclear weapons activities conducted at Los Alamos, including information on the frequency of plutonium samples analyzed over time and the expected annual requirement for manufacturing new pits, to determine the plutonium-related research capabilities the new facility would need to meet NNSA weapons program requirements. For example, NNSA studied the number of plutonium samples that had been processed in 2007 at the old Chemistry and Metallurgy Research facility for analytical chemistry and materials characterization work and used that number as a representative average for estimating future workloads. In addition, NNSA considered the numbers of specific pieces of equipment and the associated square footage of laboratory space needed to conduct specific analytical chemistry and material characterization work. In its planning, NNSA considered how plutonium-related capabilities in the CMRR could meet changing stockpile requirements, including NNSA's established upper limit of producing 80 pits per year. NNSA designed the facility to ensure that it can meet pit production requirements regardless of the specific number of pits produced—in other words, the number of pits produced each year will not significantly affect the capabilities NNSA will need in the new facility, although capacity limits cap the quantity of new pits at 80 pits per year. 
For example, NNSA’s 2009 CMRR Program Requirements document states that the new facility will have laboratory spaces designed in a way that is flexible and modular to accommodate changes in the mission and the dynamic conditions associated with normal processing and maintenance activities in a laboratory environment. NNSA officials indicated that they are confident that the CMRR will generally meet nuclear weapons activities needs and accommodate changes in the nuclear weapons stockpile requirements, including the ability to produce up to 80 pits per year. However, some weapons activities capabilities that currently exist at other NNSA sites may no longer be available to the nuclear security enterprise because of broader NNSA modernization plans to consolidate plutonium activities. As part of NNSA’s plan to consolidate plutonium-related work at Los Alamos, the CMRR was designed to absorb some plutonium-related research from other facilities as those other facilities reduce or end their weapons activities work. For example, Livermore’s Superblock facility is equipped with the necessary systems to safely work with plutonium and to support extending the life of certain warheads in the nuclear weapons stockpile. Under NNSA’s strategy to consolidate plutonium work at Los Alamos, the majority of Livermore’s plutonium is scheduled to be removed in 2012, and some of this research will be discontinued at Superblock. NNSA plans to have the CMRR take on much of this work; however, Livermore officials told us they believe that NNSA may still lose some plutonium-related capabilities once some research is discontinued at Superblock. For example, NNSA may face a gap in the plutonium-related capabilities necessary to help improve nuclear warhead surety—that is, safety, security, and use control. NNSA has not planned for another facility to take over this work, and NNSA officials told us that the CMRR has not been designed to support this surety research. Furthermore, NNSA and Los Alamos officials told us that NNSA may also lose some pit testing capabilities that only take place in the Superblock at Livermore and are expected to be discontinued there in 2013. Pit testing includes thermal, vibration, and other environmental tests on pits that ensure that the weapon can successfully function from the time it is in the stockpile until it is deployed and reaches a target. Livermore officials told us that CMRR will not accommodate pit environmental testing because the systems used to conduct the environmental tests could cause vibrations through the rest of the facility. This could disrupt other work that requires precision instrumentation. Livermore officials also told us that these pit environmental testing capabilities are necessary to help meet nuclear weapons stockpile requirements. Because the CMRR was not intended to support all of these capabilities, NNSA will need to find another location if this plutonium-related work currently being conducted at Livermore is to be continued. NNSA has begun studying the extent to which the environmental pit testing capabilities will be needed, and if so, where they will be located. However, NNSA currently has no final plans for relocating them elsewhere.
DOE and NNSA conduct important plutonium-related research in other mission areas outside of nuclear weapons stockpile work, and it is unclear whether the CMRR as designed will be large enough to accommodate these nonweapons activities because the agencies have not comprehensively studied those activities’ long-term research and storage needs. An NNSA record of decision states that the CMRR will support other national security missions involving plutonium-related research, including nonproliferation, nuclear forensics, and nuclear counterterrorism programs. For example, NNSA plans to use analytical chemistry capabilities in CMRR to perform nuclear forensics work that would be needed to, among other things, identify the source of and individuals responsible for any planned or actual use of a nuclear device. However, DOE and NNSA have not comprehensively studied the long-term plutonium-related research and storage needs of programs outside of NNSA’s nuclear weapons stockpile work and therefore cannot be sure that the CMRR can accommodate them. In particular, DOE does not have important information on departmentwide analytical chemistry and material characterization research and storage needs, which can be helpful in making fully informed planning decisions about its long-term infrastructure and consolidation plans for the nuclear security enterprise. As we have previously reported, conceptual planning for a building—a process by which an organization’s facility needs are identified and understood—is the critical phase of any successful building project development. This conceptual planning should produce a building design that is well defined according to an organization’s needs and that reflects input from all key stakeholders before design work begins. NNSA and Los Alamos officials told us that the programs supporting mission areas outside of the nuclear stockpile work—including NNSA’s Office of National Technical Nuclear Forensics and Office of Fissile Materials Disposition—were generally not involved in planning the CMRR. Los Alamos officials said that they thought that there was too much time before the new facility would be operating for other mission areas to know their specific needs. However, by not including input from all the mission areas during the design of CMRR, NNSA has risked not knowing all of the potential needs and uses for the new facility to complement its important missions outside of the nuclear weapons stockpile work. NNSA and Los Alamos have considered using space in Los Alamos’ PF-4 plutonium facility to handle additional plutonium-related research. However, NNSA officials told us that operating at a high pit production rate, approaching the upper limit of 80 pits per year, would also likely use all of PF-4’s capacity. As a result, NNSA would have to consider reducing or eliminating other mission work currently supported in PF-4 or modify CMRR to incorporate additional needed space at additional cost. In addition, the CMRR would be able to support nonweapons activity needs only if additional capacity remains after all weapons-related activities are supported. If additional capacity is not available, NNSA may face the prospect of not being able to use the new facility for one of its intended purposes of supporting certain plutonium-related research for missions outside of nuclear weapons stockpile work. A 2004 NNSA study suggested that this could effectively result in national security, nonproliferation, and environmental management programs potentially not performing in a cost-effective, compliant, and timely manner.
In addition, the CMRR has been designed to support Los Alamos and NNSA’s mission need to store significant quantities of nuclear material associated with the plutonium operations in a safe and secure manner using vault storage. Specifically, NNSA plans to shift all of Los Alamos’ current vault storage materials from its existing chemistry and metallurgy facility and overflow inventory from the PF-4 facility to the CMRR. However, Los Alamos officials told us that Los Alamos may not have enough storage space even after the CMRR is complete. NNSA plans to first use the newly available vault space in the CMRR for short-term, daily storage of nuclear materials being used for programmatic work and then use any remaining space for long-term storage. NNSA designed the CMRR without much long-term vault storage because these materials were initially planned to be shipped offsite for disposal. However, due to broader departmental challenges with other NNSA sites receiving materials for disposal, Los Alamos may not be able to ship its nuclear material off-site. If this is the case, Los Alamos officials told us that they may have to find additional long-term vault storage. This could also potentially affect Los Alamos’ ability to receive nuclear materials from other sites under NNSA’s consolidation strategy. In addition, Los Alamos officials told us that NNSA is still considering facility layout options that would allow for vault storage space to be configured for other operations and lab space. If this space is used for functional laboratory space rather than storage, less space will be available for short-term vault storage than NNSA originally thought. Los Alamos officials told us that one of the major uses of CMRR storage space will be to relieve vault storage space at its plutonium facility that has already reached its available storage capacity. Once NNSA resumes the CMRR project and constructs the facility, CMRR will play an important role in ensuring the continued safety and reliability of the U.S. nuclear weapons stockpile. The CMRR can potentially offer NNSA the opportunity to improve efficiency, save costs, and reduce safety hazards for workers. Because of the facility’s importance to the stockpile, multibillion dollar price tag, the inherent challenges in building facilities that can safely and securely store plutonium, and NNSA’s ongoing difficulties managing large projects, it is critical that NNSA and Congress have accurate estimates of the project’s costs and schedules, particularly when the CMRR project is resumed. After facing a nearly six-fold increase in estimated cost and schedule delays, NNSA’s most recent cost and schedule estimates generally meet industry best practices, but there are important weaknesses that call these estimates’ reliability into question. For example, an independent cost estimate—the most rigorous method to validate major cost elements that is performed by an entity that has no stake in the approval of the project—has not yet been conducted. To its credit, NNSA plans to have an independent cost estimate conducted prior to the completion of CMRR’s project baseline once the project is resumed. With regard to the project’s schedule estimate, however, NNSA cannot yet provide high assurance that all project risks are fully accounted for in the project’s schedule risk analysis that is used for updating the project’s schedule contingency estimates. 
As a result, NNSA cannot yet be fully confident that, once it decides to resume the CMRR project, the project will meet its estimated completion date, which could lead to further delays and additional costs. However, reliable cost and schedule estimates for CMRR that fully meet industry best practices are of little use if DOE’s and NNSA’s mission needs are not met. Constructing CMRR is an important part of NNSA’s strategy to modernize its nuclear weapons facilities into a smaller and more responsive, efficient, and secure infrastructure to meet the changing requirements of the nuclear weapons stockpile. The CMRR was intended to support the plutonium-related research and storage needs of other DOE and NNSA national security missions and activities outside of the nuclear weapons stockpile work, including homeland security and nuclear nonproliferation activities; but because NNSA decided early in the project to reduce the size of the proposed facility to save money, CMRR may now lack the ability to accommodate these other research needs. In particular, the planned removal of most plutonium from Livermore presents NNSA with a dilemma in that the primary benefit of consolidating plutonium at Los Alamos—lower security costs—may be offset by the need to replace Lawrence Livermore National Laboratory’s plutonium research, storage, and environmental testing capabilities. Importantly, when NNSA decided to consolidate plutonium operations at Los Alamos, it did not fully consider whether planned or existing facilities at Los Alamos would be capable of continuing plutonium work being conducted elsewhere. For example, CMRR was not intended to accommodate the thermal, vibration, and other environmental pit testing that Livermore currently conducts because the vibrations this type of testing creates could disrupt other work at CMRR that requires precision instrumentation. Nevertheless, this type of testing is necessary to meet nuclear weapons stockpile requirements and so must be conducted somewhere. The full extent of the potential shortfall in plutonium research capabilities is not well-understood because DOE and NNSA have not comprehensively assessed their plutonium-related research, storage, and environmental testing needs. Plutonium research for the nuclear weapons stockpile and for other missions may have to compete for limited laboratory and storage space in CMRR and other facilities at Los Alamos, especially if the demand for newly manufactured pits unexpectedly increases. As a result, expansion of CMRR or construction of costly additional plutonium research, storage, and testing facilities at Los Alamos or elsewhere may be needed sometime in the future. To strengthen cost and schedule estimates for the CMRR and ensure needed plutonium research needs are sufficiently accommodated, we recommend that the Secretary of Energy take the following three actions: 1. Once NNSA resumes the CMRR project and prior to establishing a new cost and schedule baseline, incorporate all key risks identified by CMRR project officials into the project’s schedule risk analysis, and ensure that this information is then used to update schedule contingency estimates, as appropriate. 2. 
Conduct a comprehensive assessment of needed plutonium-related research, storage, and environmental testing needs for nuclear weapons stockpile activities as well as other missions currently conducted at other NNSA and DOE facilities, with particular emphasis on mitigating the consequences associated with eliminating plutonium research, storage, and environmental testing capabilities from NNSA’s Lawrence Livermore National Laboratory. 3. Using the results of this assessment, report to Congress detailing any modifications to existing or planned facilities or any new facilities that will be needed to support plutonium-related research, storage, and environmental testing needs for nuclear weapons stockpile activities as well as other missions conducted by NNSA and DOE. We provided NNSA with a draft of this report for its review and comment. In its written comments, reproduced in appendix IV, NNSA generally agreed with our recommendations to conduct a comprehensive assessment of needed plutonium-related research, storage, and environmental testing needs and to report to Congress on any modifications to existing or planned facilities or any new facilities that will be needed to support these needs. However, NNSA disagreed with our recommendation to incorporate all key risks identified by project officials into the project’s schedule risk analysis. Specifically, NNSA stated that, subsequent to receiving our draft report for its comments, the President’s budget request for fiscal year 2013 was released and resulted in several changes to the funding and execution of the CMRR project. In particular, construction of the CMRR is now to be deferred for at least 5 years. Therefore, NNSA stated that it is conducting additional analysis to determine the most effective way to provide analytical chemistry, materials characterization, and storage capabilities that were originally intended for the CMRR through the use of existing infrastructure. As part of this analysis, NNSA stated that it will evaluate options to use existing facilities at other sites. We believe this is consistent with our recommendation that NNSA conduct a comprehensive assessment of needed plutonium-related research, storage, and environmental testing needs and that NNSA’s decision to defer construction of the CMRR will give it sufficient time to conduct this assessment. NNSA also commented that it will continue to work with Congress and other stakeholders as it adjusts its plutonium strategy. In our view, this is also consistent with our recommendation to report to Congress on any modifications to existing or planned facilities or any new facilities that will be needed to support plutonium-related research, storage, and environmental testing needs for nuclear weapons stockpile activities as well as other missions conducted by NNSA and DOE. With regard to our recommendation to incorporate all key risks identified by CMRR project officials into the project’s schedule risk analysis, NNSA commented that spending project money to update the CMRR project’s schedule would not be prudent because of the construction delay. Therefore, NNSA disagreed with the recommendation. NNSA stated that its efforts in the near term would be focused on closing out the current design and that any future efforts will require updated cost and schedule estimates. 
We agree with NNSA that it is not necessary to update the project’s schedule at this time because of the recently announced construction delay; however, we maintain that it is important that all project risks are fully accounted for in the CMRR’s schedule once the project is resumed. Therefore, we clarified our recommendation to specify that NNSA should take action to ensure that the CMRR’s schedule risk analysis is appropriately revised to account for all project risks when NNSA resumes the project and before it establishes a new cost and schedule baseline. We are sending copies of this report to the Secretary of Energy; the Administrator of NNSA; the Director, Office of Management and Budget; the appropriate congressional committees; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to examine (1) changes in the cost and schedule estimates for the construction of the facility and the extent to which its most recent estimates reflect best practices, (2) options the National Nuclear Security Administration (NNSA) considered to ensure that plutonium-related research activities could continue as needed, and (3) the extent to which NNSA’s plans to construct the Chemistry and Metallurgy Research Replacement Nuclear Facility (CMRR) and its consideration of options reflected changes in nuclear weapons stockpile requirements and other plutonium-related research needs. To examine the project’s cost and schedule estimates and the extent to which its current estimates reflect best practices, we reviewed relevant NNSA documents and met with agency and contractor officials on the changes that have occurred to date and the reasons for them. We compared NNSA’s most recent detailed cost and schedule estimates with industry best practices contained in our cost estimating and assessment guide and discussed them with project officials to give them the opportunity to provide feedback on our assessment. Specifically, our review examined the NNSA cost estimates prepared in April 2010 and the schedule estimates, which at the time of our review had been updated as of May 2011, or more recently for some portions of the schedule. As such, the cost and schedule estimates we reviewed do not reflect NNSA’s 5-year construction deferral recently announced in February 2012, and NNSA has not yet determined the potential long-term cost impact of this delay. To examine the options NNSA considered to continue plutonium-related analytical work, we reviewed NNSA and contractor documents on plutonium research needs and the various options available to meet those needs. We also met with NNSA and contractor officials to better understand how these options were analyzed to determine the best approach to fulfill NNSA’s mission. While NNSA evaluated options on how to best meet its mission needs, it may have also evaluated alternatives based on the environmental impact of building the CMRR.
As such, our review examined the options NNSA assessed to maintain the capabilities for plutonium-related analytical chemistry, material characterization, and storage and did not address NNSA’s compliance with requirements of the National Environmental Policy Act. We also met with NNSA and contractor officials to gain a better understanding of how these options were analyzed to determine the best approach to fulfill NNSA’s mission. To determine the extent to which NNSA’s plans reflect changes in nuclear weapons stockpile requirements, we reviewed NNSA analyses that were used to support CMRR project decisions and met with NNSA officials to determine if these analyses were comprehensive and reflected up-to-date nuclear weapons stockpile requirements. We also visited Los Alamos and Lawrence Livermore National Laboratories. To ensure the data we used were sufficiently reliable, we compared information gathered from a variety of data sources. For example, we interviewed officials from both Los Alamos and Lawrence Livermore National Laboratories to obtain separate and independent perspectives on CMRR project plans. We determined the data were sufficiently reliable for our purposes. We conducted this performance audit from February 2011 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Detailed best practice The cost estimate includes all life cycle costs. Detailed assessment Substantially met. The total project cost for the construction of the Nuclear Facility is $4.2 billion. Government and contractor costs are included. However, operations and retirement costs are not included. These costs were not included because there was no mandate to estimate them. The cost estimate spans from start of construction in June 2010 to completion in 2020 with a schedule contingency through 2022. The cost estimate completely defines the program, reflects the current schedule, and is technically reasonable. Fully met. Technical descriptions were provided in multiple documents such as the “CMRR Nuclear Facility (NF) Estimate at Complete Forecast–April 2010,” the Los Alamos CMRR Mission Need Statement, the Program Requirements Documents, the WBS dictionary, and the “Final Environmental Impact Statement for the Chemistry and Metallurgy Project.” The cost estimate work breakdown structure is product-oriented, traceable to the statement of work/objective, and at an appropriate level of detail to ensure that cost elements are neither omitted nor double-counted. Partially met. The work breakdown structure and work breakdown structure dictionary are product oriented and the work breakdown structure flows down to level 4 of the program, project, or task. A statement of work was provided in the form of a mission need statement; however, it is not easily reconciled with the work breakdown structure dictionary. The estimate documents all cost- influencing ground rules and assumptions. Fully met. Cost influencing ground rules and assumptions can be found in the CMRR Estimate Update Execution Plan. Budget constraints and escalation are addressed. A list of high-level risk drivers along with the handling costs and risk input information was provided. 
Exclusions to the cost estimate are noted in the documents. The documentation captures the source data used, the reliability of the data, and how the data were normalized. Partially met. The data was analyzed and high-level cost drivers have been addressed as well as unit rates and quantities. Source data used to develop the estimate were found. The cost estimate was based on historical data from other Department of Energy (DOE) sites and the data was normalized. However, the independent review team found inconsistencies and discrepancies of quantities (hours) and costs. In addition, the review team reported that even though the basis of estimate referred to current contract awards or proposals, no reference was made to specific contracts or proposals by date and number. Detailed best practice The documentation describes in sufficient detail the calculations performed and the estimating methodology used to derive each element’s cost. Detailed assessment Substantially met. While not explicitly stating what methodology was used, the pricing approach summary indicates that the estimate was developed using a combination of the build-up method and extrapolation from pricing information and productivity rates from other DOE sites. However the calculations involved were not clearly shown. The documentation describes, step by step, how the estimate was developed so that a cost analyst unfamiliar with the program could understand what was done and replicate it. Substantially met. The documentation for the estimate contains a summary narrative about the project as well as high-level cost summaries. The documentation discusses risk and contingency reserve. However, it does not address sensitivity although a sensitivity analysis was performed. Narrative on how the sensitivity analysis was conducted was not provided. The documentation discusses the technical baseline description and the data in the baseline is consistent with the estimate. Substantially met. There are technical descriptions discussed in the documentation that are consistent with the basis of estimate and the work outlined in the detail cost estimate spreadsheets. However, we are unable to map specific technical descriptions as outlined in the requirements document to cost elements in the high-level or detailed cost estimates. During the site visit, project officials showed us how the scope of work in the work breakdown structure dictionary was written in a way to illustrate how the scope of work was captured. The documentation provides evidence that the cost estimate was reviewed and accepted by management. Partially met. Los Alamos policy states that reviews shall be performed. According to project officials, these reviews typically include an integrated project team review, functional manager review, directorate review, and in the case of projects of high complexity or risk, an external corporate review and/or DOE Los Alamos Site Office review. A CMRR functional review was held March 12, 2010, and the review of the current estimate was listed on the meeting agenda. However, without further documentation we are unable to determine whether or not a briefing was given to management that clearly explains the detail of the cost estimate— including presentation of lifecycle costs, ground rules and assumptions, estimating methods and data sources as they relate to each work breakdown structure element, results of sensitivity analysis, risk and uncertainty analysis, and if a desired level of confidence was reached. 
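To make the build-up (engineering) estimating method referenced above concrete, the following minimal sketch shows the basic arithmetic of pricing quantities of work at unit rates, escalating each item to the year the work is performed, and rolling the results up through a work breakdown structure. The WBS elements, quantities, rates, and escalation assumption are hypothetical illustrations and are not drawn from the CMRR estimate or its source data.

    # Minimal illustration of an engineering build-up estimate rolled up by WBS element.
    # All WBS elements, quantities, and rates below are hypothetical.

    ESCALATION_RATE = 0.025  # assumed annual escalation to convert base-year to then-year dollars

    # (wbs_element, quantity, unit_rate_in_base_year_dollars, years_from_base_year)
    line_items = [
        ("1.1 Structural concrete (cu yd)",     12000,    950, 2),
        ("1.2 Process piping (linear ft)",       8000,    310, 3),
        ("1.3 Gloveboxes (each)",                  40, 750000, 4),
        ("2.1 Construction management (hours)", 90000,    140, 3),
    ]

    def then_year_cost(quantity, unit_rate, years, escalation=ESCALATION_RATE):
        """Build-up method: quantity x unit rate, escalated to the year the work occurs."""
        return quantity * unit_rate * (1 + escalation) ** years

    rollup = {}
    for wbs, qty, rate, yrs in line_items:
        level1 = wbs.split(".")[0]  # roll detailed items up to WBS level 1
        rollup[level1] = rollup.get(level1, 0) + then_year_cost(qty, rate, yrs)

    for level1, cost in sorted(rollup.items()):
        print(f"WBS {level1}: ${cost:,.0f}")
    print(f"Total point estimate: ${sum(rollup.values()):,.0f}")

Documenting each quantity, rate, and escalation factor in this way is what allows an analyst unfamiliar with the program to trace and replicate the calculations, which is the best practice the assessment above found only partially satisfied.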
Additionally, it is not clear that an affordability analysis, contingency reserve, conclusions, or recommendations were discussed with management. The documentation also does not show management’s acceptance of the cost estimate. Detailed best practice The cost estimate results are unbiased, not overly conservative or optimistic, and based on an assessment of most likely costs. Detailed assessment Substantially met. Risk and uncertainty analyses were performed providing an 84 percent confidence level. There are three components that contribute to the total contingency value established for the project—schedule, estimate, and technical and programmatic risk analysis. The estimate has been adjusted properly for inflation. Substantially met. The documentation contained information on escalation rates. However, it is unclear how the cost estimate data were normalized. For example, costs are listed but are not labeled as constant or then-year dollars. Detailed calculations on how escalation was applied to the cost estimate are not documented. The estimate contains few, if any, minor mistakes. Substantially met. The numbers shown in the estimate at complete document and the cost estimate spreadsheet are accurate and the independent review team found only one minor mistake in their review of the estimate. However, we were not provided access to the detailed calculations behind the spreadsheet to check that the estimate was calculated correctly. The cost estimate is regularly updated to reflect significant changes in the program so that it always reflects current status. Substantially met. The CMRR Project Control Plan outlines a formal change control process that is to be executed in accordance with the Los Alamos Project Management and Site Services Directorate as well as the CMRR Baseline Change Control Board. These documents provide an approach to document, communicate, and approve potential changes to scope, cost, and schedule, and they provide the basis for incorporating changes into the project baseline and/or the forecast estimate at completion. These documents also describe the activities and responsibilities for making changes to the baseline. Any variances between planned and actual costs are documented, explained, and reviewed. Substantially met. Earned value is entered for each work package based on the earned value method indicated for that work package. Progress is reported in terms of percent complete by work package and is verified, analyzed, and reported to the project controls team. This information is then analyzed by the project controls team and control account managers and reviewed with CMRR management as the final reports are completed and published. However, there is no evidence of the cost estimate being updated to capture variances from the earned value system. Detailed best practice The estimate is based on a historical record of cost estimating and actual experiences from other comparable programs. Detailed assessment Substantially met. Part of the estimate was developed using the engineering build up method which includes historical data from other DOE/NNSA sites (Waste Treatment Plant, Mixed Oxide Fuel Fabrication Facility, and two chemical demilitarization facilities). The reliability of the data is documented where confidence levels associated with quantity, productivity, labor, and nonlabor pricing are addressed. However, for some of the data, the sources were not provided and there was no evidence that earned value data was used to develop or update the estimate. 
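The earned value variance reporting described above rests on a few standard calculations comparing the budgeted value of work performed against the work planned and the cost actually incurred. The following minimal sketch shows those calculations for a single hypothetical work package; the planned value, earned value, actual cost, and budget figures are illustrative assumptions, not CMRR data.

    # Standard earned value calculations for a single work package.
    # The planned value (PV), earned value (EV), and actual cost (AC) figures are hypothetical.

    def earned_value_metrics(pv, ev, ac, budget_at_completion):
        cv = ev - ac                      # cost variance: negative means over cost
        sv = ev - pv                      # schedule variance: negative means behind schedule
        cpi = ev / ac                     # cost performance index
        spi = ev / pv                     # schedule performance index
        eac = budget_at_completion / cpi  # estimate at completion if current cost efficiency continues
        return {"CV": cv, "SV": sv, "CPI": round(cpi, 2), "SPI": round(spi, 2), "EAC": round(eac)}

    print(earned_value_metrics(pv=4_200_000, ev=3_900_000, ac=4_500_000,
                               budget_at_completion=60_000_000))
    # A CPI below 1.0 (here about 0.87) flags a cost variance to be documented, explained,
    # and reviewed, and feeding it back into the estimate is what keeps the estimate current.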
The cost estimate includes a sensitivity analysis—a technique that identifies a range of possible costs based on varying major assumptions, parameters, and data inputs. Substantially met. CMRR conducted some sort of sensitivity analysis. No documentation was given providing a narrative on how the sensitivity analysis was conducted—including whether high percentages of cost were determined and how their parameters and assumptions were examined. Additionally, it cannot be determined whether the outcomes were evaluated for parameters most sensitive to change or how this analysis was applied to the estimate. However, during a site visit, Los Alamos officials provided a copy of a report that shows how a sensitivity analysis was applied to the nuclear facility cost estimate. For this assessment, a high and low range was determined. Some of the factors that were varied included overhead and General and Administrative rates, and escalation. Detailed best practice A risk and uncertainty analysis was conducted that quantified the imperfectly understood risks and identified the effects of changing key cost driver assumptions and factors. Detailed assessment Substantially met. The cost estimate includes contingency costs for schedule ($99 million), cost estimate ($508 million) and technical and programmatic risks ($404 million). While a schedule risk analysis was performed that identified $99 million in schedule contingency, it is not clear how this analysis was done as no supporting documentation was provided. An independent review team assessed the schedule risk analysis and found that the risk model did not contain enough detail to allow specific risk events to be associated with the schedule activities they affect. Documentation supporting the cost estimate ($508 million) risk and uncertainty analysis was conducted via a Monte Carlo simulation which established an 84 percent confidence level for cost estimate uncertainty. The process by which this analysis was done is well documented and includes the contingency level range results. However, this risk and uncertainty analysis only reviewed classic cost estimate contingency and did not assess technical, programmatic or schedule risks. In addition, the independent review team found that the cost risk uncertainty analysis was done at a summary level so it does not fully reflect the uncertainty of the design costs associated with uncertainty related to quantities or prices listed. Major cost elements were crossed checked to see whether results were similar. Partially met. Documentation was provided that shows comparison of selected CMRR cost elements against cost estimates of other sites. An independent cost estimate was conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results. Partially met. An independent cost estimate was not conducted by a group outside of the acquiring organization. However, an independent cost review was performed by the U.S. Army Corps of Engineers in conjunction with an experienced contractor. This independent cost review resulted in the identification of key findings which require a Corrective Action Plan. The independent cost review focused on engineering design, and nuclear facility special facility equipment engineering design. The independent cost review team had 24 key findings and recommendations. The ratings we used in this analysis are as follows: “Not met” means the CMRR provided no evidence that satisfies any of the practice. 
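A Monte Carlo cost risk and uncertainty analysis of the kind referenced above draws each cost element from a probability distribution, sums the draws over many trials, and reads contingency from the resulting distribution at a chosen confidence level, such as the 84 percent level cited for the CMRR estimate. The sketch below illustrates the general approach with three hypothetical cost elements and triangular distributions; it is not a reconstruction of the CMRR analysis, and the element names and ranges are assumptions.

    import random

    random.seed(1)

    # Hypothetical cost elements: (low, most likely, high) in millions of dollars.
    elements = {
        "Site and civil work": (300, 380, 520),
        "Nuclear facility structure": (900, 1100, 1600),
        "Special facility equipment": (500, 650, 1000),
    }

    TRIALS = 20_000
    totals = sorted(
        sum(random.triangular(low, high, mode) for low, mode, high in elements.values())
        for _ in range(TRIALS)
    )

    point_estimate = sum(mode for _, mode, _ in elements.values())
    p84 = totals[int(0.84 * TRIALS)]  # total cost at the 84 percent confidence level
    print(f"Point estimate:       ${point_estimate:,.0f}M")
    print(f"84% confidence level: ${p84:,.0f}M")
    print(f"Implied contingency:  ${p84 - point_estimate:,.0f}M")

A schedule risk analysis applies the same simulation idea to activity durations rather than costs, which is why performing it at a summary level, or without tying specific risk events to the activities they affect, limits how much confidence the resulting contingency figures can support.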
“Minimally met” means the CMRR provided evidence that satisfies a small portion of the practice. “Partially met” means the CMRR provided evidence that satisfies about half of the practice. “Substantially met” means the CMRR provided evidence that satisfies a large portion of the practice. “Fully met” means the CMRR provided evidence that completely satisfies the practice. Explanation The schedule should reflect all activities as defined in the program’s work breakdown structure, to include activities to be performed by both the government and its contractors. Detailed assessment Fully met. The schedule integrates all of the effort of NNSA, its contractor, and its major subcontractors. The schedule should be planned so that it can meet critical program dates. To meet this objective, key activities need to be logically sequenced in the order that they are to be carried out. In particular, activities that must finish before the start of other activities (i.e., predecessor activities) as well as activities that cannot begin until other activities are completed (i.e., successor activities) should be identified. By doing so, interdependencies among activities that collectively lead to the accomplishment of events or milestones can be established and used as a basis for guiding work and measuring progress. Substantially met. While we found that about 16 percent of the activities were missing predecessors and successors, or had constraints, lags, and leads, the majority (84 percent) of the activities were logically sequenced. There are more than 2,400 activities (5 percent) with missing or dangling predecessors or successors. There are summary tasks linked with logic (3 percent), but we have determined that they do not affect the credibility of the schedule. There are 123 activities (less than 1 percent) with start-to-finish logic. There are 460 activities (less than 1 percent) that have 10 predecessors or more. There are 590 activities (1 percent) scheduled with constraints, in addition to or substituting for complete logic. The schedule should reflect what resources (i.e., labor, material, and overhead) are needed to do the work, whether all required resources will be available when they are needed, and whether any funding or time constraints exist. Substantially met. Not all activities in the project schedule are resource loaded—only 3,757 activities (8 percent) out of the 45,429 activities with positive remaining duration have resources assigned in the schedule we received. However, there is credible evidence that the program and Los Alamos manage resources in various ways outside the project schedule and that their resource solutions are fed back to the project schedule so that it is feasible given resource limits. The schedule should realistically reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, data, and assumptions used for cost estimating should be used. Further, these durations should be as short as possible and they should have specific start and end dates. Excessively long periods needed to execute an activity should prompt further decomposition of the activity so that shorter execution durations will result. Substantially met. There are 1,642 activities (4 percent) with durations 44 days or greater, which means that the majority of the activities (96 percent) have activities that are of short duration. 
Contributing to this is the rolling wave approach to the schedule, where the near-term activities are detailed while activities further in the future are left in large planning packages until they become near-term, at which point they are broken down into their component activities. The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with already sequenced activities. These links are commonly referred to as handoffs and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels enables different groups to work to the same master schedule. Substantially met. As discussed previously in the “sequencing all activities,” there are activities missing predecessor and successor logic as well as the presence of constraints, lags, and leads that call into question the adequacy of horizontal traceability. Vertical traceability was confirmed. The schedule hierarchy includes five levels, increasing in detail and specificity from top to bottom. Explanation Using scheduling software, the critical path—the longest duration path through the sequenced list of key activities—should be identified. The establishment of a program’s critical path is necessary for examining the effects of any activity slipping along this path. Potential problems that may occur on or near the critical path should also be identified and reflected in the scheduling of the time for high-risk activities. The schedule should identify float so that schedule flexibility can be determined. As a general rule, activities along the critical path typically have the least amount of float. Detailed assessment Substantially met. This schedule’s critical path has 5,479 activities with zero or negative total float. There are so many critical activities because of a number of constraints on intermediate milestones which is causing negative float on paths to those activities. However, these activities do not all drive the final delivery. Los Alamos officials said that when they baseline the schedule, they plan to remove many of the constraints that are causing negative float. Many of these constraints are there to enable Los Alamos to monitor status of intermediate milestones. Substantially met. Of the remaining activities, 22 percent have unexplained large positive and large negative total float values. Even with agency review, these were present in the schedule. The total float values in many cases are several years long. There are 4,611 activities (10 percent) that have total float over 1,000 days or about 3.8 years. These high total float values are likely related to the incomplete logic described in the “sequencing all activities” best practice. A schedule risk analysis should be performed using a schedule built using a good critical path method and data about project schedule risks, as well as statistical analysis techniques (such as Monte Carlo) to predict the level of confidence in meeting a program’s completion date. This analysis focuses not only on critical path activities but also on activities near the critical path, since they can potentially affect program status. Minimally met. There is no evidence that a risk analysis has been conducted on this schedule or any summary schedule derived from this schedule. 
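Total float and the critical path discussed above are derived from a forward and backward pass through the activity network: the forward pass sets each activity's earliest start and finish, the backward pass sets the latest dates that still preserve the project finish, and total float is the difference. The following minimal sketch runs both passes on a small hypothetical network; the activities and durations are illustrative, and activities with zero float form the critical path.

    # Critical path method on a small hypothetical network.
    # Durations are in days; predecessors define finish-to-start logic.
    activities = {
        "A": {"dur": 10, "pred": []},
        "B": {"dur": 20, "pred": ["A"]},
        "C": {"dur": 15, "pred": ["A"]},
        "D": {"dur": 5,  "pred": ["B", "C"]},
    }

    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    for name in activities:  # dict order is topological here (predecessors listed first)
        a = activities[name]
        es[name] = max((ef[p] for p in a["pred"]), default=0)
        ef[name] = es[name] + a["dur"]

    project_finish = max(ef.values())

    # Backward pass: latest start (LS) and latest finish (LF).
    ls, lf = {}, {}
    for name in reversed(list(activities)):
        successors = [s for s, a in activities.items() if name in a["pred"]]
        lf[name] = min((ls[s] for s in successors), default=project_finish)
        ls[name] = lf[name] - activities[name]["dur"]

    for name in activities:
        total_float = ls[name] - es[name]
        flag = "critical" if total_float == 0 else f"float {total_float}d"
        print(f"{name}: ES {es[name]:>2} EF {ef[name]:>2} LS {ls[name]:>2} LF {lf[name]:>2}  ({flag})")

Date constraints placed on intermediate milestones override the backward pass dates this calculation would otherwise produce, which is why such constraints can generate the negative float and the unusually large float values described above.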
Los Alamos officials said that they have conducted a risk analysis using Monte Carlo simulation based on a prior and more concise schedule a full year before the version we reviewed was developed. The version we reviewed contained 90,000 activities and was developed in the Spring of 2010—a full year after Los Alamos conducted its risk analysis and Monte Carlo simulation. Los Alamos did not conduct a risk analysis on this more recent schedule, nor did it prepare and simulate a summary schedule based on this more recent schedule. The summary schedule that Los Alamos simulated was based on critical and near critical paths. This schedule comprised the main, secondary and tertiary critical paths. As a result, we believe that the schedule did not cover the entire work of the project, and therefore may have excluded some activities or paths that have risk sufficient to affect the finish date. Instead, Los Alamos selected about 2,100 activities based on total float, but this practice is risky because they may not have included all of the activities that risks in the risk register may affect. Explanation The schedule should use logic and durations in order to reflect realistic start and completion dates for program activities. The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates, which can be used to determine whether schedule variances will affect downstream work. Maintaining the integrity of the schedule logic is not only necessary to reflect true status, but is also required before conducting a schedule risk analysis. Detailed assessment Fully met. The CMRR schedule is updated at least monthly, although much of it is updated weekly. The schedule integrity is checked after each update and metrics are compiled on problems to determine if the schedule’s integrity is improving with each update. There are no activities in the past that lack the designation of actual start or actual finish. There are some activities on or after the data date that have actual start or finish designations, but that may be because there are 15 schedules combined in the Integrated Master Schedule and some were updated somewhat after May 9, 2011. The ratings we used in this analysis are as follows: “Not met” means the CMRR provided no evidence that satisfies any part of the practice. “Minimally met” means the CMRR provided evidence that satisfies a small portion of the practice. “Partially met” means the CMRR provided evidence that satisfies about half of the practice. “Substantially met” means the CMRR provided evidence that satisfies a large portion of the practice. “Fully met” means the CMRR provided evidence that completely satisfies the practice. In addition to the contact named above, Ryan T. Coles, Assistant Director; John Bauckman; Jennifer Echard; Eugene Gray; David T. Hulett; Jonathan Kucskar; Alison O’Neill; Christopher Pacheco; Tim Persons; Karen Richey; Stacey Steele; Vasiliki Theodoropoulos; and Mary Welch made key contributions to this report.
Plutonium—a man-made element produced by irradiating uranium in nuclear reactors—is vital to the nuclear weapons stockpile. Much of the nation’s current plutonium research capabilities are housed in aging facilities at Los Alamos National Laboratory in New Mexico. These facilities pose safety hazards. The National Nuclear Security Administration (NNSA) has decided to construct a multibillion dollar Chemistry and Metallurgy Research Replacement Nuclear Facility (CMRR) to modernize the laboratory’s capabilities to analyze and store plutonium. GAO was asked to examine (1) the cost and schedule estimates to construct CMRR and the extent to which its most recent estimates reflect best practices, (2) options NNSA considered to ensure that needed plutonium research activities could continue, and (3) the extent to which NNSA's plans reflected changes in stockpile requirements and other plutonium research needs. GAO reviewed NNSA and contractor project design documents and visited Los Alamos and another plutonium facility at Lawrence Livermore National Laboratory in California. The estimated cost to construct the CMRR has greatly increased since NNSA’s initial plans, and the project’s schedule has been significantly delayed. According to its most recent estimates prepared in April 2010, NNSA determined that the CMRR will cost between $3.7 billion and $5.8 billion—nearly a six-fold increase from the initial estimate. Construction has also been repeatedly delayed and, in February 2012 after GAO provided its draft report to NNSA for comment, NNSA decided to defer CMRR construction by at least an additional 5 years, bringing the total delay to between 8 and 12 years from NNSA’s original plans. Infrastructure-related design changes and longer-than-expected overall project duration have contributed to these cost increases and delays. GAO’s review of NNSA’s April 2010 cost and schedule estimates for CMRR found that the estimates were generally well prepared, but important weaknesses remain. For example, a high-quality schedule requires a schedule risk analysis that incorporates known risks to predict the level of confidence in meeting a project’s completion date and the amount of contingency time needed to cover unexpected delays. CMRR project officials identified hundreds of risks to the project, but GAO found that these risks were not used in preparing a schedule risk analysis. As a result of these weaknesses, NNSA cannot be fully confident, once it decides to resume the CMRR project, that the project will be completed on time and within estimated costs. NNSA considered several options to preserve its plutonium-related research capabilities in its decision to build CMRR at Los Alamos. NNSA assessed three different sizes for a new facility—22,500, 31,500, and 40,500 square feet. In 2004, NNSA chose the smallest option. NNSA officials stated that cost was the primary driver of the decision, but that building a smaller facility would result in trade-offs, including the elimination of contingency space. In the end, NNSA decided to build a minimally-sized CMRR facility at Los Alamos with a broad suite of capabilities to meet nuclear weapons stockpile needs over the long-term. These capabilities would also be used to support plutonium-related research needs of other departmental missions. NNSA’s plans to construct CMRR focused on meeting nuclear weapons stockpile requirements, but CMRR may not meet all stockpile and other plutonium-related research needs. 
NNSA analyzed data on past workload and the expected need for new weapon components to help ensure CMRR’s design included the necessary plutonium-related research capabilities for maintaining the safety and reliability of the nuclear stockpile. However, some plutonium research, storage, and environmental testing capabilities that exist at Lawrence Livermore National Laboratory may no longer be available after NNSA consolidates plutonium-related research at Los Alamos. Furthermore, NNSA conducts important plutonium-related research in other areas such as homeland security and nuclear nonproliferation, but it has not comprehensively analyzed plutonium research and storage needs of these other programs outside of its nuclear weapons stockpile work and therefore cannot be sure that the CMRR plans will effectively accommodate these needs. As a result, expansion of CMRR or construction of more plutonium research and storage facilities at Los Alamos or elsewhere may be needed in the future, potentially further adding to costs. GAO is making recommendations to improve CMRR’s schedule risk analysis and to conduct an assessment of plutonium research needs. NNSA agreed with GAO’s recommendations to assess plutonium research needs, but disagreed that its schedule risk analysis should be revised, citing its recent decision to defer the project. GAO clarified the recommendation to specify that NNSA should take action when it resumes the project.
The ARNG performs both federal and state missions and is one of two reserve components of the Department of the Army, the Army Reserve being the other reserve component. The ARNG provides trained and equipped units ready to (1) defend property and life in the 54 states and territories and (2) respond to overseas combat missions, counterdrug efforts, reconstruction missions, and more, as needed. The Secretary of the Army is responsible for creating overarching policy and guidance for all components of the Army, including the ARNG. The Chief of NGB is, among other responsibilities, the official channel of communication between the Department of the Army and the 54 states and territories in which the ARNG has personnel assigned, and is responsible for ensuring that ARNG personnel are accessible, capable, and trained to protect the homeland and to provide combat resources to the Army. NGB has issued guidance to ARNG personnel within the states and territories for recruiting and retention, and the adjutants general of each state are generally responsible for developing and implementing programs or policies that are consistent with NGB guidance. The Chief of NGB issued the National Guard regulation that is intended to integrate all of the recruiting and retention programs, policies, and procedures necessary for developing, implementing, and monitoring the ARNG strength maintenance program in the states and territories. Appendix VI shows selected instructions, regulations, and other criteria related to recruiting and retention. Although the Director, ARNG, has overall responsibility for maintaining policy and programs for ARNG recruiting, OSD requires certain recruiting-related reports to be submitted. These include reports on the numbers of enlistment waivers, recruiting resources, recruiting production data, and recruiter irregularities. Each year, Congress, through the National Defense Authorization Act, provides the ARNG with an overall authorized end-strength. Subsequently, the Director, ARNG, develops a recruiting mission with a goal of fully utilizing that overall authorized end-strength. The Director, ARNG, provides individual end-strength goals and recruiting missions to the adjutants general of the 54 states and territories. In order to help the states and territories achieve state-level end-strength goals, the Chief of NGB, through the ARNG Strength Maintenance Division, provides the state-level ARNG in each of the 54 states and territories with funding, personnel, guidance, and training. Further, financial incentives are available to help personnel in the states and territories meet and sustain ARNG end-strength goals. Within the Department of the Army, the Office of the Deputy Chief of Staff for Personnel is responsible for reviewing, monitoring, and evaluating the effectiveness of ARNG incentives programs. The Director, ARNG, is responsible for exercising staff supervision and management of financial incentives programs pertaining to ARNG soldiers. Within the ARNG, the ARNG-Personnel Programs, Resources, and Manpower Division (ARNG-HRM) is responsible for developing budget requests for financial incentives, developing and implementing policy, and conducting oversight. Within each state and territory, the adjutant general is responsible for development and implementation of the state strength maintenance program and has a recruiting and retention battalion that manages recruiting and retention personnel and day-to-day operations.
ARNG recruiters are assigned to a recruiting and retention battalion in the 54 states and territories, and each battalion commander issues an annual mission for enlistment based on various factors with a goal of achieving the state annual end-strength goal. Military Entrance Processing Stations are responsible for testing and conducting physical examinations on applicants prior to their joining a military component. At each Military Entrance Processing Station, an ARNG Guidance Counselor is responsible for processing ARNG applicants and ensuring that all paperwork is complete and that the applicant meets eligibility standards. In contrast to how active-Army recruiters are only responsible for recruiting, ARNG recruiters are responsible for recruiting, retention, and attrition for their assigned area of operations in their assigned state or territory. To achieve the goal of fully utilizing the ARNG’s overall authorized end-strength ceiling, the ARNG-HRM works with state-level military personnel officers and recruiting and retention battalions in the 54 states and territories and adjusts annual recruiting and reenlistment missions as necessary. Further, ARNG applicants generally are placed in unit vacancies within a 50-mile radius of an applicant’s home. This approach generally limits the pool of applicants to positions in close proximity to the applicants’ homes, while active-Army applicants are not limited to a specific geographic region and are recruited for positions where available worldwide. Our prior work has reviewed military recruiting practices and made a number of recommendations to address recruiting-related issues, such as improving the use of financial incentives and oversight of recruiter activities: In November 2005, we reported that DOD lacked information on financial incentives provided for certain occupational specialties, making it difficult for the department to determine whether financial incentives were being targeted effectively. We recommended that the DOD components, including the ARNG, report all of their over- and underfilled occupational specialties, including the reasons why the occupational specialties are over- and underfilled, and to justify their use of enlistment and reenlistment bonuses provided to servicemembers in occupational specialties that have more personnel than authorized. In addition, we recommended that DOD develop a management plan to address recruiting and retention challenges. DOD partially concurred with our recommendations but did not implement them. We reported in May 2009 that the Army had substantially increased its recruiters and funding for incentives, although it had not used existing research to identify and set bonuses at dollar amounts that are the most effective. We recommended that the Department of the Army take a number of steps to ensure cost-effective measures are taken, and DOD concurred with three recommendations and partially concurred with the fourth. DOD implemented one of our recommendations regarding building on currently available analysis to help set bonus amounts. We reported in January 2010 that the military components were not consistently reporting cases of recruiter irregularities and that greater oversight by OSD was needed. We made four recommendations regarding increasing visibility and tracking of recruiter irregularities, and DOD concurred with all of the recommendations. 
DOD implemented three of our recommendations regarding clarifying, sharing, and tracking of recruiter irregularity data but did not implement our recommendation to include appropriate disclosures concerning data limitations in recruiter irregularity reports. We reported in July 2015 that Army reserve components did not have complete, accurate, and timely information to report soldiers’ nonavailability rates and that multiple systems did not interface in a way to allow for timely updates between all systems. We made four recommendations regarding data reliability, and DOD concurred with all of the recommendations. Appendix VII identifies our recommendations from selected prior reports and the status of DOD’s implementation. The ARNG Strength Maintenance Division has recently taken steps to increase oversight of how states and territories adhere to recruiting policies and procedures; however, the ARNG Strength Maintenance Division has not permanently established the Recruiting Standards Branch to ensure ongoing monitoring of state-level recruiting activities. The ARNG Strength Maintenance Division and the selected states we visited conduct reviews of a portion of packets from recruits. Additionally, in June 2014, the ARNG Strength Maintenance Division began a pilot effort through its Recruiting Standards Branch to conduct inspections to help provide oversight of state-level recruiting activities, but the branch has not been permanently established to ensure ongoing monitoring. ARNG Strength Maintenance Division and selected state officials stated that steps have recently been taken to provide oversight over enlistment packets at the national and state levels. ARNG Strength Maintenance Division officials stated that a portion of their oversight of the recruiting process includes a review of selected enlistment packages at the national level to help identify any errors in paperwork and any irregularities involving recruiters. Officials stated that, since fiscal year 2010, the ARNG Strength Maintenance Division has conducted reviews of 10 percent of packets from enlistees and soldiers starting military training. ARNG Strength Maintenance Division officials stated that they review every document within the selected packets and maintain electronic records of the results. In addition, officials stated that if there are deficiencies identified in the review, the ARNG Strength Maintenance Division sends a training team to help correct them and to provide retraining for staff as necessary. According to National Guard regulation, state-level recruiting officials are to conduct quality checks over enlistments. At the four selected states we visited, there were multiple reviews of packets for enlistments, and officials stated that these reviews are intended to help minimize errors and recruiter irregularities. According to National Guard regulation, recruiters are responsible for initial prescreening of the applicant, which involves a background review, an initial determination of physical eligibility, and a review of prior education, among other things. In the four selected states that we visited, recruiters use checklists to screen applicants and submit applicant packets to their respective supervisors for review prior to the packets going to Military Entrance Processing Stations where an applicant is tested, examined, and processed for enlistment into the ARNG. 
In the four selected states we visited, recruiting personnel were required to electronically submit enlistment packets to the Military Entrance Processing Station a minimum of 48 to 72 hours before each applicant arrived at the Military Entrance Processing Station for processing. According to National Guard regulation and the ARNG's Military Entrance Processing Station Operations Guide, each Military Entrance Processing Station is to be assigned guidance counselors who are responsible for quality-control checks designed to help prevent entry of anyone not qualified for the ARNG. The regulation and guide state that the Military Entrance Processing Station guidance counselors are responsible for reviewing all applicants' enlistment packets submitted by recruiters for the ARNG. The Military Entrance Processing Station guidance counselor's primary role, according to National Guard regulation, is to ensure that all qualified persons applying for ARNG enlistment complete the process, that applicants obtain a reservation for training, if necessary, and that incentive agreements are valid, among other things. ARNG Strength Maintenance Division officials noted that three regional managers oversee the guidance counselors at the Military Entrance Processing Stations and help ensure that the guidance counselors at each station are following applicable policy and guidance. Also, applicants must complete a test, called the Armed Forces Vocational Aptitude Battery, to determine the applicant's qualification for enlistment, and a Military Entrance Processing Station physician conducts a medical examination to determine whether the applicant meets physical health standards. When the applicant has met the qualifications for military enlistment, the guidance counselor conducts another check of the paperwork, and the applicant signs an enlistment contract and is sworn into the ARNG.
In June 2014, the ARNG Strength Maintenance Division began a pilot effort through its Recruiting Standards Branch to help provide oversight of state-level recruiting activities, but the branch has not been permanently established to ensure ongoing monitoring. Officials from the ARNG Recruiting Standards Branch stated that the branch was established in response to GAO's findings in a prior report and a Department of the Army Inspector General's report. Specifically, in January 2010, we found that the ARNG's data on recruiter irregularities—or wrongdoing on the part of recruiters—were incomplete and recommended that DOD take actions to increase visibility and track recruiter irregularities. DOD concurred with our recommendations and took steps to clarify, share, and track recruiter irregularity data. Later, in February 2012, the Department of the Army Inspector General found errors in processing enlistment packages and recommended that the ARNG create an entity to provide oversight of recruiting standards. In response, the ARNG Recruiting Standards Branch was established as a pilot program and completed its first official inspection in October 2014. As of July 16, 2015, this office had completed inspections in 16 states. An ARNG Recruiting Standards Branch official stated that the goal is to complete at least 12 state inspections each year. The ARNG Recruiting Standards Branch uses a four-tiered standard scale for compliance and reporting. Each inspection results in one of four ratings: Non-Compliant, Pending Compliance, Full Compliance, or a Program of Excellence Award; the excellence award is the highest rating.
The state inspections include a review of state-level recruiting procedures and programs to determine compliance with overarching guidance and a review of accession packages to determine compliance with eligibility standards and policy. Following each inspection, the ARNG Recruiting Standards Branch requires states and territories to submit corrective-action plans to address any identified deficiencies, which an official stated are used in subsequent re-inspections to demonstrate state efforts to resolve the deficiencies. Nine of the 16 states inspected as of July 16, 2015, had submitted their respective corrective-action plans to address any deficiencies identified during their inspection, regardless of the inspection rating. The Recruiting Standards Branch plans to conduct a re-inspection of each state or territory that does not meet at least the Full Compliance standard. The ARNG Strength Maintenance Division Chief is informed of the inspection results, and results are included in a newsletter sent to all states and territories.
According to an ARNG Recruiting Standards Branch official, the inspections program can be effective even though the ARNG does not have direct chain-of-command authority over the states and territories. The official stated that the state inspections and any associated corrective-action plans can help ARNG recruiters to comply with policy. This official cited the Army Inspector General inspection, which recommended the creation of a recruiting standards entity, as a sign of leadership's support. The official noted that although there is no direct chain-of-command authority, state officials to date have participated in the inspections. An ARNG Recruiting Standards Branch official stated that if a state is unwilling to participate in the inspections process, the ARNG's Chief of Staff will work with the respective state's or territory's Chief of Staff.
ARNG Strength Maintenance Division officials stated that the inspections to date have been helpful in determining whether states and territories are in compliance with guidance and current policy. Of the 16 states inspected as of July 2015, 2 received a rating of Program of Excellence, 12 received a rating of Full Compliance, and 2 received a rating of Pending Compliance on their inspections. ARNG Strength Maintenance Division officials stated that the findings from the inspections conducted to date, along with the issues identified in our January 2010 report and the February 2012 Department of the Army Inspector General report noted above, highlight the continued need for the ARNG Recruiting Standards Branch to conduct inspections. However, the ARNG Recruiting Standards Branch remains in a pilot phase and is working to seek approval for permanent staff from the Director, ARNG, and subsequently the Department of the Army. The approval for permanent staff may not take place until early 2017. ARNG Strength Maintenance Division officials stated that they believe that continued oversight of state recruiting activities is important and that currently they are using positions for the ARNG Recruiting Standards Branch that are intended for use in other areas. Officials stated that the ability to permanently assign individuals to the ARNG Recruiting Standards Branch is very important to the ARNG's ability to continue to exercise its oversight role.
The Director, ARNG, has overall responsibility for maintaining policy and programs for ARNG recruiting, and Standards for Internal Control in the Federal Government states that agencies should have control activities in place for ensuring that management's directives are carried out. Without permanently establishing an entity, such as the ARNG Recruiting Standards Branch or another entity, to conduct inspections of state-level recruiting activities, the Director, ARNG, may be limited in the ability to ensure that ARNG policies and procedures are being properly implemented by the states.
The ARNG had mixed results in meeting its overall recruiting goals and nearly met its goals for initial military training; however, the ARNG does not track whether soldiers are completing their initial term of service or military obligation. The ARNG met its recruiting goals in 2 of the 5 years from fiscal years 2010 through 2014. Further, from fiscal years 2011 through 2014, the ARNG nearly met its goals for completion of initial military training, but we found that the ARNG does not have consistent, complete, and valid data on why soldiers do not complete training and when soldiers separate during the training process. We also found that while the ARNG sets and tracks goals to keep the loss of soldiers in their initial term below a maximum percentage, the ARNG does not track whether ARNG soldiers who join in a given fiscal year complete their initial term of service. Finally, the ARNG Strength Maintenance Division has not periodically estimated the full cost of recruiting and training soldiers who do not complete their initial term of service.
ARNG data show that from fiscal years 2010 through 2014 the ARNG met its annual overall recruiting goals in 2 of the 5 years; however, officials stated that the purpose of the recruiting goals is to fully utilize the authorized end-strength in the National Defense Authorization Act, which data show the ARNG nearly met or slightly exceeded over this time period. ARNG Strength Maintenance Division officials stated that, in addition to recruiting goals, managing losses and setting goals for reenlistments play key roles in the ARNG's ability to meet its goal of fully utilizing its authorized end-strength. The ARNG manages its end-strength, in part, by setting goals for each state and territory to recruit a certain number of individuals to enlist in the ARNG. GAO's leading practices in strategic human-capital management and Standards for Internal Control in the Federal Government state that agencies should establish goals and monitor the extent to which they are met.
Prior GAO work has shown that historically the ARNG has had mixed results in meeting its recruiting goals. Specifically, in November 2005, we reported that the ARNG exceeded its annual recruiting goals from fiscal years 2000 through 2002 but fell short of its goals in fiscal years 2003 through 2005, achieving only 80 percent of its goal in 2005. In May 2009, we reported that the ARNG made progress in meeting its annual recruiting goals since fiscal year 2005, meeting more than 95 percent of its goal in both fiscal years 2006 and 2007 and exceeding its goal in fiscal year 2008. We then noted in a January 2010 report that the ARNG met its recruiting goal in fiscal year 2009. Our analysis for fiscal years 2010 through 2014 is consistent with this historical trend, as the ARNG met its recruiting goals in only 2 of the 5 years.
Officials stated that the purpose of the state and territory goals for recruiting is to fully utilize the ARNG's authorized end-strength. Table 1 shows the extent to which the ARNG met annual recruiting goals as compared to the end-strength authorized by the National Defense Authorization Acts (NDAA) from fiscal years 2010 through 2014, as reported by the ARNG. ARNG-Personnel Programs, Resources, and Manpower Division (ARNG-HRM) officials stated that the ARNG's recruiting goals have generally decreased from fiscal years 2010 through 2014, in part because the ARNG's authorized end-strength also decreased over this time period. The President is permitted by section 123a of Title 10 of the United States Code to waive the NDAA end-strength limitations under certain circumstances. Pursuant to a delegation of that authority, the Army granted the ARNG a waiver to exceed the NDAA authorized end-strength in fiscal years 2010 and 2011. However, the officials stated that the waiver was no longer granted in fiscal years 2012 through 2014, thus requiring the ARNG to remain within the authorized end-strength and to reduce its annual recruiting goals. While the ARNG met its recruiting goals in only 2 of the 5 years from fiscal years 2010 through 2014, the ARNG achieved or nearly achieved its goal of fully utilizing its authorized end-strength in all of the years, as shown in table 1.
When setting goals for the states and territories, the ARNG emphasizes that attrition management has a significant effect on the ARNG's ability to utilize its authorized end-strength and that, in addition to setting recruiting goals, the ARNG meets its end-strength by setting goals for managing losses and retaining existing personnel. Since fiscal year 2009, the ARNG has established annual goals for the states and territories to reenlist a certain number of individuals nearing the end of their term of service. ARNG data showed that the ARNG exceeded or nearly met its reenlistment goal in 4 of the 5 years from fiscal years 2010 through 2014. Table 2 shows the extent to which the ARNG met reenlistment goals from fiscal years 2010 through 2014.
ARNG-HRM officials stated that the ARNG has increased its reenlistment goal over time because the number of individuals who joined the ARNG greatly increased from fiscal years 2006 through 2009, in part, due to the Grow the Force initiative. The officials stated that the increased number of soldiers who joined during this time period became eligible to reenlist from fiscal years 2012 through 2014, thus increasing the population of soldiers eligible for reenlistment. ARNG Strength Maintenance Division officials noted that the ARNG achieved a lower percentage of its reenlistment goal in fiscal year 2013 because the ARNG wanted to emphasize reenlistments that year and set a more aggressive goal than in other years. For example, in fiscal year 2012, the ARNG set a goal to reenlist 48,446 soldiers out of an eligible population of 125,785 soldiers, while in fiscal year 2013 it set a goal to reenlist 59,233 soldiers out of an eligible population of 121,624 soldiers.
The ARNG nearly met its goals for completion of initial military training from fiscal years 2011 through 2014; however, we identified inconsistencies in how states recorded reasons that soldiers did not complete their training and found that available Army training data do not provide the ARNG with complete data on the timing of when soldiers leave during the training process.
Further, while the ARNG uses an internal database to collect information on why soldiers do not complete training and when they separate during the training process, ARNG officials stated that they could not determine whether the data were valid.
The ARNG nearly met its goals for completion of initial military training, which includes basic and advanced individual training, from fiscal years 2011 through 2014. From fiscal years 2011 through 2014, the ARNG set a goal of at least 84 percent and achieved a completion rate of approximately 81 to 82 percent in each of those years. The ARNG sets and tracks several goals that focus on states' and territories' ability to prepare their recruits to attend initial military training. One such goal is based on the number of soldiers who completed initial military training (both basic and advanced) as a percentage of all soldiers who began training, whether or not they completed it, over a rolling period covering the past 12 months. By law, members of the ARNG who have not completed the minimum training required to deploy within 24 months must be discharged. ARNG Strength Maintenance Division officials stated that they have not separately set a goal for the extent to which soldiers complete basic training because the ARNG is primarily concerned with soldiers completing both basic and advanced training to become qualified for their military occupation. Table 3 shows the ARNG's goal for completion of initial military training, when available, and ARNG completion rates from fiscal years 2011 through 2014.
The percentage of ARNG soldiers who completed their initial military training has generally increased annually, from about 70 percent in fiscal year 2004 to about 81 percent in fiscal year 2014. ARNG Strength Maintenance Division officials attributed the improvements in completion of training largely to the ARNG Recruit Sustainment Program, which began in fiscal year 2005. The purpose of the ARNG Recruit Sustainment Program is to increase the likelihood that ARNG soldiers will complete initial military training by ensuring that recruits are mentally prepared and physically fit prior to attending training. The program aims to provide recruits with realistic training that is similar to the first 3 weeks of basic training. In addition, recruiters stated that the ARNG Recruit Sustainment Program allows the ARNG to maintain contact with recruits while they wait to attend training and to monitor their conduct and educational progress to help ensure they stay eligible to join. For states or territories that struggle to meet the ARNG's goal for training completion, ARNG Strength Maintenance Division officials stated that they share best practices from states that are meeting or exceeding the ARNG's goal or send out ARNG mobile training teams to the states or territories to help address challenges.
We identified inconsistencies in how the four selected states we visited recorded reasons that soldiers did not complete their initial military training. ARNG regulation and guidance require states and territories to report the reasons why soldiers leave the ARNG in the ARNG personnel database of record, known as the Standard Installation/Division Personnel System (SIDPERS).
Although our findings are not generalizable to all states and territories, we found that the states we reviewed varied in whether they selected only a general category in the system about the timing of a soldier's departure from initial training or a category noting the specific reason each soldier left training prior to completion. When soldiers leave training prior to completion, officials from states and territories are to select the reason why the soldier left the ARNG from a list of over 100 predetermined categories, such as alcohol or other drug abuse or medically unfit at the time of appointment. In interviews with officials from the four selected states that we visited, officials provided different responses about how they select a category regarding why soldiers left the ARNG before basic training or during basic or advanced training. For example, one official stated that he selected general categories about timing, such as whether a soldier left before attending basic training or left during basic or advanced training; however, that state's officials did not select a category that specified the reason why the soldier left the ARNG. In contrast, officials in another state stated that, from April 2014 through March 2015, they selected 13 different categories identifying specific reasons, in addition to the general categories about timing, for soldiers who left before basic training or during basic or advanced training. Table 4 shows the contrast in how officials in these two states chose general or specific categories for soldiers leaving training.
Further, we found that available Army training data do not provide the ARNG with complete data on the timing of when soldiers leave during the training process. According to GAO's leading practices on strategic human-capital management, reliable data help enable an agency's decision makers to evaluate the success of their human-capital approaches and to identify opportunities for enhancing agency results. The Army's training system of record, known as the Army Training Requirements and Resources System, contains soldiers' training records, including the dates soldiers completed basic training and advanced training. However, we found that this system was missing completion dates from basic training for a significant number of soldiers who should have had dates listed. Specifically, we found that of the 134,293 non-prior-service enlisted soldiers who joined the ARNG from fiscal years 2010 through 2014 and completed their initial military training as of April 15, 2015, 36,644—or 27 percent—were missing basic training completion dates. ARNG-HRM officials attributed the missing information, in part, to soldiers who attended basic and advanced training at the same training site—referred to as One Station Unit Training—because the school records only one date for completion of both basic and advanced training. ARNG-HRM officials stated that they use the information on basic training completion in the Army system to track whether recruits are completing basic training but that they understood there were data limitations due to the missing information in the system. The level of incompleteness in the data for basic training completion, however, raises concerns about whether the ARNG can use this system to determine the timing of when soldiers left during their initial military training.
Further, we found that when the One Station Unit Training sites report discharges from training, the reports do not indicate whether the soldiers were discharged during basic training or advanced training. According to a fiscal year 2014 discharge report for the Army training schools, of the 3,352 soldiers who were discharged from training sites, 1,037 soldiers—or 31 percent—were discharged from a One Station Unit Training site. As a result, the ARNG would not have visibility into whether these soldiers were discharged during basic training or advanced training.
ARNG Strength Maintenance Division officials acknowledged that the databases of record for ARNG personnel and Army training data do not offer the level of detail they need to determine the reasons why soldiers left before or during initial military training or when soldiers separated during the training process. ARNG Strength Maintenance Division and ARNG-HRM officials stated that they were aware that there could be inconsistencies in how states and territories select the category to describe the reason why soldiers left the ARNG before or during training. The officials attributed the inconsistency in part to the availability of the general categories in SIDPERS for soldiers who leave before beginning basic training or prior to completion of advanced training. ARNG-HRM officials noted that the 54 states and territories each enter the information into SIDPERS, which likely results in inconsistencies in data entry across the states. Further, as noted above, the system of record for data on soldiers' training records is the Army Training Requirements and Resources System, and the system of record for data on why soldiers leave the ARNG is SIDPERS, rather than one centralized data source. In July 2015, we reported that multiple data systems used to track soldier availability data did not interface in a way to allow for timely updates between all systems to ensure the relevance and value of the data that management uses to make soldier availability-related decisions. We recommended that the Secretary of the Army develop and implement ways that the Army reserve components can facilitate timely updates of availability data between all data systems through the current system interfaces to improve the relevance and value of the data that management is using to make soldier availability-related decisions, and DOD concurred with our recommendation.
Recognizing the gap in information on why soldiers did not complete training and the timing of when they separated during the training process, ARNG Strength Maintenance Division officials stated that they started to collect this information in fiscal year 2010 by modifying a management tool—known as the Vulcan Recruit Sustainment Program Database—used to track recruits while they are in the training process. ARNG Strength Maintenance Division officials emphasized that the database is an internal management tool and not a database of record and is therefore generally not used to report information outside of the ARNG. According to the officials, the ARNG modified the tool to capture when a soldier separated during the training process, such as before basic training, during basic training, or during advanced training, as well as the reason for the loss.
The officials stated that while the categories for reasons are similar to those in SIDPERS, the ARNG removed the two general categories for separating before or during initial military training to require the states and territories to select the specific reason why the soldier left the ARNG. ARNG Strength Maintenance Division officials stated that they did not modify SIDPERS to capture this information because the Army is in the process of transitioning to an Army-wide personnel database, the Integrated Personnel and Pay System-Army. The officials stated that they have not been allowed to make changes to SIDPERS since at least 2007 in anticipation of the new system. In February 2015, we reported that the full deployment of the Integrated Personnel and Pay System-Army is not expected until April 2020 and that the Army had not developed any portion of the system as of November 2014.
As part of a broader review of the ARNG Recruit Sustainment Program, in September 2008 the U.S. Army Audit Agency reported that the Vulcan database sometimes did not provide accurate and timely data for Recruit Sustainment Program managers. The U.S. Army Audit Agency found that the Vulcan database provided useful information, but its effectiveness was limited because program managers at the state level sometimes did not update or use the system as the preferred management tool. According to the report, the program managers did not use the Vulcan database because it did not provide information that the states needed to monitor recruit status, the database was not user-friendly, and the ARNG did not routinely provide formal training to users. The U.S. Army Audit Agency concluded that the data in the Vulcan system were not reliable for making sound management decisions and made four recommendations to address the issues identified, including developing and providing routine formal training to Vulcan users and ensuring that state ARNG organizations use the Vulcan database to manage the Recruit Sustainment Program and not locally developed systems.
The ARNG agreed with the report's recommendations and, according to ARNG Strength Maintenance Division officials, the ARNG has taken steps that are intended to address the report's recommendations. For example, ARNG Strength Maintenance Division officials stated that states' use of the Vulcan database is continuously managed by means of daily reviews and validated during the branch leadership's accreditation process. Further, ARNG Strength Maintenance Division officials stated that, since the U.S. Army Audit Agency audit, the ARNG instituted training liaison officers, who act as liaisons to the Active component training facilities in order to manage ARNG recruits at training, and contracted for administrative support for the Recruit Sustainment Program. According to ARNG Strength Maintenance Division officials, both the training liaison officers and Recruit Sustainment Program contract support staff now play a role in maintaining information in the Vulcan database. In July 2015, the U.S. Army Audit Agency started a follow-on review of the ARNG Recruit Sustainment Program, which includes reexamining the Vulcan database and evaluating the measures that the ARNG has taken to address the deficiencies described in the September 2008 report.
The ARNG’s personnel database of record, SIDPERS, and the Army’s database of record on training, the Army Training Requirements and Resources System, do not provide ARNG with full visibility into why soldiers do not complete initial military training and when they separate during the training process. Further, while ARNG has modified its internal Vulcan database to capture this information, ARNG-HRM officials stated that they could not determine the information to be valid because inputting the information is voluntary, and the Vulcan database is not the database of record on losses from the ARNG. According to GAO’s leading practices on strategic human-capital management, a critical success factor is using consistent, complete, and valid data to determine key performance objectives and goals. The Director, ARNG, has overall responsibility for maintaining policy and programs for the ARNG recruiting programs, and ARNG-HRM officials stated that they use the data on reasons why individuals left the ARNG to develop policies to help retain soldiers. Without consistent data about specific reasons soldiers left the ARNG before or during training in the ARNG database of record, officials will continue to be limited in their ability to identify actual reasons for separation. Further, without complete information on when soldiers separate during the training process, ARNG cannot know the extent to which soldiers are leaving during basic or advanced training. Such limitations hinder the Director, ARNG’s ability to develop policies and programs intended to help create an environment in which a higher number of soldiers complete training. ARNG Strength Maintenance Division does not track whether ARNG soldiers complete their initial term of service. When individuals join the ARNG, they sign a contract to actively serve in the ARNG for a specified amount of time, which varies by soldier. For example, while a non-prior- service enlisted soldier must enlist in the ARNG for a total military service obligation of 8 years, a portion of the 8 years can be active service in the ARNG with the balance being in the Individual Ready Reserve. On an ongoing basis, the ARNG Strength Maintenance Division tracks initial- term attrition rates—the number of soldiers in their initial term who leave the ARNG during a given period compared to the average number of soldiers who were serving in their initial term over that same period—with the goal of keeping attrition below an established maximum rate. For the purposes of tracking initial-term attrition rates, the ARNG does not track all enlisted soldiers who join, but defines soldiers in their initial term as enlisted soldiers who have completed initial military training and have less than 6 years in service. ARNG Strength Maintenance Division officials stated that they establish the goal based on the ARNG’s historical performance and that the metric is adjusted over time to encourage incremental improvement. ARNG officials have established an attrition goal of a percentage of less than or equal to 12 percent for soldiers leaving during their initial term of service, and as of May 2015 the ARNG had an attrition rate of 8.1 percent. 
According to the ARNG, managing attrition has a significant effect on the ARNG's ability to achieve its end-strength goal, and ARNG Strength Maintenance Division officials stated that they track initial-term losses in this way because it helps states and territories manage their respective end-strengths by better anticipating future losses and because the ARNG can include the most recent enlistments in its analysis of initial-term losses.
While the ARNG's calculation of the initial-term attrition rate provides the ARNG Strength Maintenance Division with some information that helps officials manage end-strength, the ARNG Strength Maintenance Division does not regularly track whether all soldiers who join in a given fiscal year ultimately complete their initial term of service. We obtained data on enlisted soldiers who joined the ARNG from fiscal years 2001 through 2007 and analyzed whether they ultimately completed their initial term of service; we found that about 40 percent of these soldiers did not complete their initial term of service (see table 5). According to GAO's leading practices on strategic human-capital management, valid data help enable an agency's decision makers to evaluate the success of their human-capital approaches and to identify opportunities for enhancing agency results. ARNG Strength Maintenance Division officials stated that there could be some advantages to tracking the extent to which soldiers who join in a given fiscal year complete their initial term of service, but that it may be viewed as redundant reporting given that the ARNG already tracks attrition rates for initial-term soldiers. Further, ARNG Strength Maintenance Division officials stated that tracking soldiers from the date of their enlistment to their final completion is a complex task. However, regularly tracking the extent to which soldiers who join the ARNG in a fiscal year complete their initial term of service, as sketched in the example following this discussion, can help the ARNG understand what human-capital decisions may have led to certain trends in the data. For example, ARNG Strength Maintenance Division officials stated that multiple factors may have contributed to the roughly 40 percent of soldiers we identified as not completing their initial term of service from fiscal years 2001 through 2007. The officials noted that this time period was at the height of the troop surge in Iraq, when many of the quality metrics were loosened across the Army in order to meet expanded end-strengths. As noted above, the officials stated that the Recruit Sustainment Program, the program to which they attribute higher completion rates for initial military training since fiscal year 2005, was not in full force until the latter part of this time period. Further, tracking completion by soldiers who join in a given fiscal year can help the ARNG identify the points in time during soldiers' enlistments when they are more likely to separate from the ARNG. Without the ARNG regularly tracking the extent to which soldiers complete their initial term of service and understanding trends in the data, officials do not have full visibility into the effect the ARNG's programs and initiatives have in helping states meet their strength and readiness requirements.
In addition to not tracking whether soldiers complete their initial term of service, the ARNG had not estimated the total cost to recruit and train an ARNG soldier.
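The cohort-style tracking discussed above (following all soldiers who join in a given fiscal year through completion of their initial term) could be implemented along the lines of the sketch below; the data frame and column names are illustrative assumptions and do not reflect the structure of SIDPERS or any other ARNG system.

```python
import pandas as pd

# Illustrative records only; real cohort tracking would draw on the
# personnel database of record rather than a hand-built table.
soldiers = pd.DataFrame({
    "fiscal_year_joined":     [2001, 2001, 2002, 2002, 2003, 2003],
    "completed_initial_term": [True, False, True, True, False, True],
})

# Share of each entry cohort that completed its initial term of service.
completion_by_cohort = (soldiers
                        .groupby("fiscal_year_joined")["completed_initial_term"]
                        .mean())
print(completion_by_cohort)
```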
In response to our review, ARNG Strength Maintenance Division officials estimated that in fiscal year 2014 it cost the ARNG approximately $62,000 to recruit and train an ARNG soldier who attended basic training and advanced training at separate training sites, or approximately $51,000 for a soldier who attended basic training and advanced training at the same training site. According to officials, these estimates include the salary paid to the soldier while in training, as well as enlistment incentives and the administrative costs to process the soldiers, among other things. ARNG Strength Maintenance Division officials stated, however, that the estimate includes costs other than those that support recruiting, such as resources used to manage attrition and retain personnel, and that additional analysis is needed to further refine this estimate. For active-duty soldiers in the Army Active Component, the Army has estimated that recruiting and training cost about $72,000 per soldier who attended basic training and advanced training at separate training sites, or about $54,000 for soldiers who attended basic training and advanced training at the same training site, in fiscal year 2014.
As mentioned above, the ARNG has recruited about 190,000 soldiers from fiscal years 2011 through 2014, and not all of these individuals completed initial military training. However, the ARNG Strength Maintenance Division had not previously estimated the ARNG's costs for recruiting and training soldiers, and officials stated that they would have to update the calculation frequently because the associated costs change over time. While we recognize that the costs of recruiting and training a soldier can change over time, it is important for the ARNG to periodically estimate these costs, such as during an annual budget cycle or another appropriate time period, because doing so would better enable the ARNG to know how it is spending its resources. According to GAO's leading practices on strategic human-capital management, agencies should have valid data for determining whether they are maximizing their human-capital investments and should keep the data they gather current. Without periodically estimating the cost to recruit and train an ARNG soldier, the ARNG Strength Maintenance Division does not know the extent of its investment in soldiers and the potential loss of investment when soldiers do not complete training or their initial term of service. Having this information could be particularly important in light of our analysis above showing that about 40 percent of soldiers who joined from fiscal years 2001 through 2007 did not complete their initial term of service.
The ARNG has some internal controls for processing its financial incentives but has not ensured that recruiting officials understand available financial incentives to fill critical military positions, and OSD, the Department of the Army, and ARNG-HRM have not exercised all of their oversight responsibilities for ARNG financial incentives programs. OSD has not monitored the costs associated with ARNG incentives programs, which the services are required to report under DOD instruction. In addition, the Department of the Army and ARNG-HRM have not evaluated the effectiveness of the financial incentives or documented such an evaluation. Department of the Army and National Guard regulations require the Department of the Army Office of the Deputy Chief of Staff for Personnel and ARNG-HRM to evaluate the effectiveness of the financial incentives programs.
The ARNG has a system of internal controls to help monitor compliance with financial incentives contracts. Beginning in fiscal year 2012, ARNG-HRM implemented the Guard Incentive Management System (GIMS) to help establish internal controls when awarding financial incentives and processing incentives payments. The management system was implemented in response to a 2010 study that a contractor conducted for the ARNG, which found deficiencies in quality controls in the ARNG's previous incentives processing system. For example, the study found that it was not clear how many people had access to or how frequently they used the incentives system. Further, the 2010 study found that the previous system did not capture all requests for payments, which made it difficult to accurately manage the funding for the programs, and that the system did not monitor and validate that a soldier remained qualified to receive an incentive.
Financial incentives are to be awarded and monitored within GIMS, which (1) establishes control by setting user levels and limitations on the transactions that users are able to complete regarding incentives; (2) monitors a soldier's compliance with his or her incentives contract and can place incentive payments in an on-hold status, withholding payments until violations are addressed, if they can be; and (3) processes and releases payment notifications per the contract schedule, assuming the soldier's incentive payment is not flagged as on hold. Some controls within GIMS are similar to control activities suggested by the Standards for Internal Control in the Federal Government, which states that attributes of internal control activities include dividing key duties and responsibilities among different people in order to reduce the risk of error, waste, or fraud. ARNG officials at all four selected states we visited said that GIMS reduces the possibility that soldiers are awarded incentives that are not in accordance with regulation. For example, the state-level incentives manager has the responsibility to monitor incentives awarded within his or her respective state, and only officials within the incentives manager's office have the authority to review and approve incentives actions, while the Military Entrance Processing Station guidance counselor is responsible for issuing incentives to a soldier. Officials stated that GIMS does not allow individuals to perform duties outside of their responsibility, which greatly reduces the risk of fraud and improper incentives activities.
Further, officials at all four selected states we visited said that GIMS greatly reduces the possibility that soldiers are awarded incentives or receive incentives payments if they do not meet the requirements for the respective incentive. For example, officials stated that soldiers cannot receive additional incentive payments until they pass their physical training test. Officials stated that in order for a soldier to receive a payment, GIMS requires the soldier's commander to verify that the soldier has passed the most recent test. If the soldier has not passed the most recent physical training test, GIMS will flag the soldier as ineligible to receive payment.
GIMS utilizes an algorithm that considers factors such as unit fill rate and time until deployment to determine which positions, when filled, provide the applicant with incentives.
The algorithm also determines the amount of the financial incentive that the applicant will receive if the applicant meets the eligibility requirements for that position. Recruiters access the information in GIMS to determine whether an available position has an incentive attached. According to the ARNG's financial incentives policy, positions are assigned an incentive tier level corresponding to how critical the position is. For example, a position scored as tier level 1 is considered most critical and has the greatest amount of incentives, while a position scored as tier level 7 is considered not critical and does not have any incentives (a simplified, illustrative sketch of such a tiering rule follows this discussion). Further, according to the policy, recruiting and retention financial incentives are intended to assist in filling critical shortages. The policy also states that ARNG-HRM is to develop and implement policy for ARNG incentives programs and that the Chief of NGB, through the ARNG Strength Maintenance Division, is responsible for developing strength maintenance guidance, programs, and training.
ARNG officials from all four of the selected states that we visited stated that they did not understand which vacant ARNG positions were considered critical and had incentives attached. These state recruiting officials stated that because they did not understand which positions were considered critical and had a financial incentive attached, it was difficult to utilize financial incentives as a recruiting tool. For example, during our review the algorithm within GIMS was initially updated on a daily basis, and officials stated that the constant change was a contributing factor in making it very difficult to understand which positions had a financial incentive. ARNG-HRM officials stated that, based on feedback, the algorithm was changed in January 2015 and is now updated on a monthly basis instead of a daily basis. However, ARNG-HRM officials stated that the change to monthly updates confused state-level recruiting personnel about how positions that become vacant within the month are assessed for criticality and incentives between the monthly updates. ARNG-HRM officials stated that there are tools, such as a search function, within GIMS that would assist state recruiting personnel in understanding which positions are considered critical and have an incentive. However, officials stated that the tools are not being fully utilized, which further contributes to recruiters not understanding which positions have a financial incentive.
Part of the reason for the confusion is that the ARNG Strength Maintenance Division and ARNG-HRM have not provided recruiters with training to help enable them to effectively use financial incentives to fill critical positions. The ARNG's training curriculum instructs recruiters to identify the motivator for each applicant and to use that motivator as leverage to gain the applicant's commitment to join the ARNG. For example, some applicants may be motivated to join the ARNG in order to gain certain job skills or to continue a family tradition of service in the ARNG. Recruiters we interviewed stated that they are trained to persuade applicants to join the ARNG based on areas other than financial incentives, such as service to country and skills training, but that the training did not teach them how to use financial incentives to fill critical positions.
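As context for the tier levels described above, the following is a purely illustrative sketch of how a position-tiering rule might work. The ARNG has not published the GIMS algorithm or its weights, so the scoring logic, thresholds, and example values below are assumptions; only the general inputs (unit fill rate and time until deployment) and the tier convention (tier 1 most critical with the greatest incentives, tier 7 not critical with none) come from the discussion above.

```python
def assign_incentive_tier(unit_fill_rate: float, months_to_deployment: float) -> int:
    """Map a vacant position to an incentive tier (1 = most critical, 7 = no
    incentive). The scoring rule and cutoffs are invented for illustration."""
    score = 1.0 - unit_fill_rate          # emptier units score higher
    if months_to_deployment <= 12:
        score += 0.5                      # near-term deployments add urgency
    if score >= 1.0:
        return 1
    if score >= 0.6:
        return 2
    if score >= 0.4:
        return 3
    if score >= 0.2:
        return 5
    return 7                              # not critical, no incentive

# A unit at 65 percent fill deploying in 9 months would rate tier 2 under these assumptions.
print(assign_incentive_tier(unit_fill_rate=0.65, months_to_deployment=9))
```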
While officials maintain that recruiters should primarily utilize motivators to gain the applicant's commitment to join the ARNG, ARNG Strength Maintenance Division and ARNG-HRM officials stated that additional training for recruiters on how to utilize tools to understand which positions have incentives may help recruiters use financial incentives more effectively. Financial incentives are a tool available to recruiters, and ARNG incentives policy states that incentives assist leadership in meeting and sustaining ARNG readiness requirements and in filling critical shortages. Incentives are to be implemented in those situations where other, less costly methods have proven inadequate or ineffective in supporting unit and skill staffing requirements. Standards for Internal Control in the Federal Government states that it is necessary to provide personnel with the right training and tools, among other items, to ensure operational success. Without training on how to use available tools, such as financial incentives, recruiters may not understand which military positions are considered critical and may not be well positioned to use available financial incentives to fill these positions.
OSD has not monitored the amounts of incentives obligated through the ARNG incentives programs. A DOD instruction requires the tracking and reporting of recruiting resources throughout DOD, including the obligation of incentives, to help ensure, among other things, that DOD is using the most efficient and cost-effective processes in the recruitment of new personnel. For example, these reports must contain information on the amount of obligations for college-fund contributions, enlistment bonuses, and student loan repayments, among other recruiting costs. According to the instruction, the information collected through the required reports is intended to help formulate policy guidance and oversight and ensure mission success. OSD has collected information on recruiting resources for the Active components through these reports, but OSD officials stated that the information does not include the amounts obligated through the incentives programs in the National Guard and Reserve components. The requirement for the National Guard and Reserve components to report this information to OSD has been in effect since at least 1991, but officials stated that turnover in staff and office reorganizations that began sometime after 2004 resulted in OSD no longer collecting and reviewing the information. In response to our review, in July 2015 the officials stated that OSD plans to include information on the amounts of incentives obligated by the National Guard and Reserve components in the next reporting cycle in October 2015 and in future reports. Without information on the amounts obligated through National Guard- and Reserve-component incentives programs, OSD cannot effectively develop policies and guidance to help ensure that recruiting resources are used efficiently and in a cost-effective manner throughout DOD.
The Department of the Army has reviewed and approved the ARNG's financial incentives policy and has recently issued a directive that expands its oversight; however, the Department of the Army and ARNG-HRM have not evaluated and documented the effectiveness of the financial incentives programs in achieving overall objectives.
The ARNG obligated about $836 million in financial incentives from fiscal years 2012 through 2014, which includes enlistment bonuses, student loan repayment, and reenlistment bonuses, among other incentives. The amount obligated for ARNG incentives programs decreased over this time period, from $348 million in fiscal year 2012 to $206 million in fiscal year 2014. ARNG officials noted that obligations related to ARNG financial incentives programs decreased over this time period because of budgetary constraints.
According to Department of the Army regulation, the Department of the Army Office of the Deputy Chief of Staff for Personnel has responsibility for conducting a semiannual review of the financial incentives program. Further, in September 2015 the Secretary of the Army issued a directive that requires all new accession incentives created by Department of the Army components, including the ARNG, to be reviewed and approved by the Department of the Army. In addition, the directive requires all Department of the Army components to submit all current incentives programs for Department of the Army review and approval. A Department of the Army official stated that the department meets its review requirement by ensuring that ARNG financial incentives policy complies with applicable laws and Army regulations. The official stated that if the ARNG determines that no midyear updates are necessary, then the Department of the Army does not conduct an additional review of ARNG incentives policy.
As previously noted, the ARNG Strength Maintenance Division and ARNG-HRM have not ensured that recruiting officials receive training and understand available financial incentives to fill critical military positions, and ARNG-HRM officials stated that they are aware that there is some confusion at the state level over which positions are considered critical. In November 2005, we reported that of the 1,500 enlisted occupational specialties across DOD, 19 percent were consistently overfilled and 41 percent were consistently underfilled from fiscal years 2000 through 2005. Moreover, we found that active-duty components provided bonuses to servicemembers in consistently overfilled occupational specialties. We recommended that DOD require its 10 components to report annually on all (not just critical) over- and underfilled occupational specialties; provide an analysis of why occupational specialties are over- and underfilled; and report annually on and justify their use of enlistment and reenlistment bonuses provided to servicemembers in occupational specialties that exceed their authorized personnel levels. DOD partially concurred with our recommendation, stating that it has visibility over skills deemed most critical for retention and that our definition for over- and underfilled specialties was unreasonably strict. Our recommendation was not implemented.
Our current review found that, in the ARNG, incentives are not always being used to fill military occupational specialties that are consistently below authorized levels and that incentives are sometimes being used for military occupational specialties that are consistently above approved levels. Specifically, we found that there were several military occupational specialties that were consistently below 80 percent of approved rates from fiscal year 2012 through fiscal year 2014. These military occupational specialties were in areas including electronic warfare, explosive ordnance disposal, and special forces, which have all been identified as important to DOD.
For example, data provided by the ARNG showed that while one special forces military occupational specialty was filled at only 70, 66, and 69 percent from fiscal years 2012 through 2014, respectively, only 14 contracts containing incentives were approved for individuals in this occupational specialty during that time frame. ARNG officials stated that, depending on the demographics of a given area in close proximity to a unit and the requirements to fill positions within a particular unit, it may be difficult to find applicants who meet the qualifications of an available position. Further, data provided by the ARNG also showed that some military occupational specialties were consistently filled over approved levels, yet hundreds of contracts containing incentives were approved for individuals in these positions from fiscal years 2012 through 2014. For example, one supply military occupational specialty was at 118, 116, and 113 percent of authorized levels from fiscal years 2012 through 2014, respectively, and yet over 880 contracts containing incentives for these positions were awarded during that time frame. Officials stated that incentives can be awarded for positions even when the national fill rate is above 100 percent of the authorized level if, for example, the unit or state-level fill rates are low.
Furthermore, the Department of the Army and ARNG-HRM have not exercised their oversight responsibilities to evaluate and document the effectiveness of the ARNG's financial incentives program in achieving overall objectives. A Department of the Army official stated that he believed that it was not the role of the Department of the Army to monitor and evaluate the effectiveness of the ARNG's financial incentives program and that he believed it was ARNG-HRM's responsibility to do so. According to ARNG-HRM officials, the effectiveness of the incentive programs is evaluated on a regular basis. However, ARNG-HRM officials have not documented the results of any evaluations or demonstrated that their current financial incentives programs are meeting overall objectives. Department of the Army and National Guard regulations state that the Department of the Army Office of the Deputy Chief of Staff for Personnel and ARNG-HRM, respectively, will monitor and evaluate the effectiveness of the ARNG financial incentives program in achieving overall objectives. A National Guard regulation states that ARNG incentives serve as extraordinary measures to assist the ARNG in meeting and sustaining personnel requirements, help meet quality and skill-match objectives, stabilize the ARNG through longer service commitments, assist in filling critical skill shortages, and support deploying and high-priority units. However, Department of the Army and ARNG-HRM officials have not documented that the ARNG incentives programs are meeting the goals listed in the National Guard regulation. Moreover, recruiting officials at all four of the selected states we visited stated that there are cases in which applicants enlist in the ARNG for nonfinancial reasons, such as service to country, but are still awarded financial incentives. Without the Department of the Army and ARNG-HRM evaluating the effectiveness of ARNG incentives programs in meeting their goals and documenting the results, they may not know whether incentives are being used effectively to meet and sustain program goals and whether incentives are being awarded to fill critical occupational specialties.
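One way to support the kind of evaluation the regulations call for is to routinely compare incentive awards against fill rates by military occupational specialty. The sketch below is illustrative only; the two rows echo the special forces and supply examples cited above, and the 80 percent and 100 percent cutoffs are assumptions rather than ARNG policy thresholds.

```python
import pandas as pd

# Illustrative data: average fill rate (share of authorized level) and the
# number of incentive contracts awarded, fiscal years 2012 through 2014.
mos = pd.DataFrame({
    "specialty":           ["special forces (example)", "supply (example)"],
    "avg_fill_rate":       [0.68, 1.16],
    "incentive_contracts": [14, 880],
})

def flag(row):
    if row.avg_fill_rate < 0.80 and row.incentive_contracts < 100:
        return "underfilled but few incentives"
    if row.avg_fill_rate > 1.00 and row.incentive_contracts > 100:
        return "overfilled yet many incentives"
    return "no flag"

mos["review_flag"] = mos.apply(flag, axis=1)
print(mos)
```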
In light of the Department of the Army's downsizing and ongoing fiscal uncertainty, and given the importance of the ARNG to help meet Army missions, it is critical for the ARNG to oversee its recruiting process and to maximize the return on the investment it makes in recruits. In response to findings from our prior work as well as others, the ARNG has taken steps to increase its oversight of the recruiting process. However, the Recruiting Standards Branch, which has played a key role in ARNG oversight of state-level recruiting activities, is in a pilot phase awaiting approval and is not permanently established. In addition to continued attention to oversight, the ARNG must have relevant, timely information that provides visibility over a soldier's career, including the recruiting process and training for his or her military occupation, through the soldier's completion of his or her initial term of service. While the ARNG has increased its percentage of soldiers who complete initial military training, available data either do not provide the ARNG with full visibility into when or why a soldier does not complete initial military training or may not be reliable. Further, the ARNG's approach to tracking soldiers does not include whether soldiers who join in a given fiscal year complete their initial term of service. Moreover, the ARNG has not periodically estimated the total cost to recruit and train a soldier. Such information could be useful to decision makers to help understand the return on investment in recruiting and training a soldier.
Although the ARNG implemented a new financial incentives system in fiscal year 2012, the ARNG has not provided training to help ensure that recruiters understand what financial incentives are available to help fill critical positions. Moreover, OSD, the Department of the Army, and the ARNG have not fully carried out their oversight responsibilities for ARNG incentives programs. Though the National Guard and Reserve components are required to provide information on their incentives programs, OSD has not enforced the requirement since around 2004, and while the Department of the Army and ARNG are required to assess the effectiveness of the ARNG financial incentives programs, they have not evaluated or documented their assessments of the programs. Given the number of occupations that are not at full strength and given the current constrained fiscal environment, it is critically important for DOD to know that incentives are being obligated effectively and that they are achieving the goal of helping to fill critical positions.
We recommend that the Secretary of the Army take the following six actions:
To aid ARNG officials in conducting their oversight of the states and territories, direct the Director, ARNG, to establish a permanent program for monitoring state-level recruiting activities, either by extending the Recruiting Standards Branch or by establishing some other similar program.
To aid ARNG officials in understanding the effectiveness of efforts to meet force requirements, direct the Director, ARNG, to do the following:
Take steps to help ensure that the ARNG collects consistent, complete, and valid data on the specific reasons why soldiers do not complete initial military training and when these soldiers separate from the ARNG during the training process.
Such steps could include modifying SIDPERS to capture this information or, if unable to modify SIDPERS, taking actions to ensure that information collected in the Vulcan Recruit Sustainment Program database is valid.
  - Regularly track whether ARNG soldiers who join in a given fiscal year complete their initial term of service.
  - Periodically estimate, such as on an annual basis or other time period as appropriate, the total cost of recruiting and initial training for a soldier who joins the ARNG.
- To help ARNG officials in using financial incentives to fill critical positions as required by Army and National Guard regulation, direct the Director, ARNG, to provide recruiters with training to better enable the use of available financial incentives.
- To help determine whether ARNG officials are effectively using financial incentives, in conjunction with the Director, ARNG, exercise their oversight responsibilities by evaluating and documenting the effectiveness of ARNG's incentives program in meeting its goals. The evaluation should also determine whether incentives are being effectively awarded in military occupational specialties that have been under or over authorized levels and whether changes are needed to effectively use existing incentives.
Given that the reporting of information related to the amounts of incentives obligated has been a requirement but has not been carried out in recent years, we recommend that the Office of the Secretary of Defense take the following action in order to ensure continued reporting in the future:
- Enforce its requirement for the National Guard and Reserve components to submit information on the amounts of incentives obligated and incorporate the required information in the recruiting resources reports.
We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with all seven of our recommendations but stated that it did not concur with our report due to our description of recruiting policies that were in place during nearly 15 years of war. DOD's comments are reprinted in appendix VIII. DOD also provided technical comments that we considered and incorporated as appropriate. Regarding its statement that the department does not concur with our report, DOD stated that we portray both Army and ARNG recruiting efforts as being targeted at sexual offenders and that we assert that, during hostilities in Iraq and Afghanistan, both the Army and ARNG made it standard practice to issue enlistment waivers for convicted sexual offenders. We disagree with that assertion. In our report, we describe the recruiting policies that were in place during a difficult time for military recruiting and that have since changed. Further, in the technical comments provided to us, DOD stated that "At the height of Operations Enduring Freedom and Iraqi Freedom, the Department of the Army and the National Guard Bureau (NGB) accepted lower quality applicants (lower aptitude scores and moral waivers) and offered significantly higher incentives in order to ensure that the Army National Guard (ARNG) and Active Army could meet their respective missions, achieve end strength goals and provide ready units to combatant commanders." DOD also provided additional context regarding factors that have made the recruiting environment increasingly more challenging. Based on DOD's technical comments, we have added context to our final report regarding our description of the Army and ARNG's past recruiting efforts.
Specifically, our report now states that, "since the end of Operation Iraqi Freedom and a significant drawdown from Afghanistan, the Department of the Army and NGB have issued guidance prohibiting approval of waivers for applicants with prior criminal offenses such as certain types of sexual offenses, as well as increasing the applicant aptitude score standards. These changes and other factors such as a less physically fit youth population have reduced the pool of qualified applicants for the ARNG and make it more difficult for recruiters to meet defined recruiting goals." In April 2012, the Director of Military Personnel Management issued a memorandum entitled Suspension of Enlistment Waivers, which stated that in an effort to reinforce and ensure compliance with Office of the Undersecretary of Defense (Personnel and Readiness) policy issued in 2009, the "enlistment or commissioning of any individual with a conviction or adverse adjudication for a felony or misdemeanor sexual offense is prohibited and no waivers are authorized." The memorandum also suspends enlistment waivers in areas of major misconduct, positive drug/alcohol tests at military entrance processing stations, and misconduct or juvenile major misconduct for drug use, possession, or drug paraphernalia, to include marijuana. DOD agreed in its comments that not enlisting individuals with felony issues does shrink the pool of eligible recruits but stated that the accession mission for the ARNG has not been in jeopardy. However, as we note in this report, while the ARNG met end-strength goals from fiscal years 2010 through 2014, the ARNG met overall recruiting goals in only 2 of the 5 years. We noted in our report that several factors have reduced the pool of qualified recruits, all while end-strength goals have remained constant. This in itself makes it inherently more difficult for recruiters to meet defined recruiting goals. Further, officials we interviewed during this engagement stated that their reduced ability to process waivers for certain law violations has made meeting the recruiting mission more difficult. We also note in our report that in June 2008 OSD issued Directive-Type Memorandum (DTM) 08-018, Enlistment Waivers, which established policy and provided guidance regarding enlistment waivers for applicants for the Military Services and provided standardized terminology for the tracking and reporting of waiver data to be implemented in fiscal year 2009. According to OSD officials, waiver data prior to fiscal year 2009 are inconsistent and unreliable. With regard to our seven recommendations, DOD concurred with all of them and described actions it plans to take to implement them. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Army; the Chief, NGB; and the Director, ARNG. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or at farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX.
The objectives of our review were to evaluate the extent to which (1) the Army National Guard (ARNG) has provided oversight of its recruiting process; (2) the ARNG met its goals for recruiting, completion of initial military training, and completion of initial term of service in recent years; and (3) the Office of the Secretary of Defense (OSD), Department of the Army, and ARNG have conducted their oversight responsibilities of the ARNG's financial incentives programs. Also, House Report 113-446 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2015 included two additional provisions: (1) a provision for us to assess the extent to which contracting vehicles used to support ARNG recruiting were in compliance with Department of Defense (DOD) and Department of the Army policies and regulations, and (2) a provision for us to assess the numbers of individuals who complete basic and advanced individual training and the average length of time between when a person enlists in the ARNG and when the person completes initial military training. To address the first of the two additional provisions, we reviewed findings from and the status of recommendations by the U.S. Army Audit Agency and the Office of the Deputy Assistant Secretary of the Army for Procurement to improve contracting processes at the National Guard Bureau (NGB). A Deputy Assistant Secretary of the Army for Procurement official determined that this information is sensitive but unclassified, so we provided this information separately to the committees. To address the second provision, we included results of our analyses in appendixes II, III, IV, and V. To describe the steps the ARNG has taken to provide oversight of the ARNG recruiting process, we obtained and reviewed guidance and policy documents regarding oversight of recruiter activities and interviewed officials from the ARNG. We selected a nongeneralizable sample of four states based on factors such as size, the total number of accessions, and geographic location to understand and describe how states conduct oversight. We selected Texas and Pennsylvania as two states with a large ARNG end strength, Virginia as a state with a medium end strength, and Idaho as a state with a small end strength. We obtained and reviewed applicable state and local recruiting and retention policy documents and interviewed recruiting and retention officials at each of these four selected states. The observations from these four selected states are not generalizable to all states and territories but provide important insight into ARNG oversight of its recruiting process. To determine the extent to which the ARNG met goals for recruiting, completion of initial military training, and completion of initial term of service, we obtained aggregate recruiting data and associated goals from the ARNG. We compared data on the ARNG's annual recruiting goals for enlistments to the number of enlistments in the ARNG for fiscal years 2010 through 2014. We chose fiscal year 2010 as the start date because our prior work discussed the extent to which the ARNG met its recruiting goals from fiscal years 2000 through 2009. We chose fiscal year 2014 as the end date because it was the most recently available data at the time of our review. In addition, we compared ARNG's goals for completion of initial military training to ARNG-reported completion rates in fiscal years 2011 through 2014.
We could not assess the extent to which ARNG met its goal for completion of initial military training in fiscal years prior to 2011 because the goals in place for those years were not available or not comparable to the completion rates provided by the ARNG. We tried to analyze data on the reasons why soldiers did not complete their initial military training and when these soldiers separated during the training process; however, we found that states inconsistently recorded the reasons why soldiers left before beginning training or prior to completing it and that training data did not provide full visibility into when soldiers separated during the training process, as we discuss in greater detail in the report. Lastly, we compared ARNG's fiscal year 2015 attrition goal for soldiers nearing the end of their initial term to the ARNG's initial term attrition rate as of May 2015, which was the most recently available data at the time of our review. We could not assess the extent to which the ARNG met goals prior to fiscal year 2015 because ARNG officials stated that the goals have changed over time and could not provide goals for previous fiscal years. In addition to analyzing available data, we interviewed ARNG officials and officials from the four states we visited for their perspectives on trends and issues we identified in analyzing the data. To determine the extent to which OSD, the Department of the Army, and the ARNG have conducted oversight of the ARNG's incentives programs, we obtained and analyzed relevant policy and guidance documents to identify oversight responsibilities for ARNG incentives programs. We interviewed officials from the ARNG to gain an understanding of how incentives policies and guidance are being applied. We interviewed officials from OSD, the Department of the Army, and the ARNG to gain an understanding of how OSD, the Department of the Army, and the ARNG conduct oversight of ARNG incentives programs. To gain an understanding of how incentives are being implemented during recruiting and retention activities, we obtained and analyzed applicable state and local incentives policies and interviewed recruiting and retention officials from our four selected states. We selected this nongeneralizable sample of four states based on factors such as size, the total number of accessions, and geographic location to understand and describe how states are implementing and using incentives programs in the recruiting process; the observations from these states are not generalizable to all states and territories. Further, to assess the number of individuals who complete basic and advanced individual training and the average length of time between when a person enlists in the ARNG and when the person completes initial military training, we obtained and analyzed data on enlisted soldiers to determine the extent to which they completed their initial military training. We elaborate on the results from our analysis in appendixes II and III, and we provide additional analysis related to the length of time soldiers who did not meet their initial term of service stayed in the ARNG in appendix IV and the reasons why soldiers left the ARNG prior to completing their initial term of service in appendix V. We analyzed data from 365,431 non-prior-service enlisted soldiers who joined the ARNG from fiscal years 2004 through 2013 to determine the extent to which they completed their initial military training.
We also calculated the length of time it took non-prior-service, non-split-option soldiers who enlisted during this time period to complete their initial military training and become qualified for their military occupational specialty. In addition, we analyzed data from 380,736 enlisted soldiers who joined the ARNG from fiscal years 2001 through 2007 to determine whether they completed their initial term of service. For those soldiers who joined during this time period but did not complete their initial term of service, we analyzed the length of time they stayed in the ARNG and the reasons for their separation. Because states inconsistently recorded the reasons why soldiers left the ARNG before completing training, we could only analyze the reasons for soldiers who completed training but left before the end of their initial term of service. To assess the reliability of the data used in this report, we analyzed the data for inconsistencies, incomplete data fields, and outliers. We also reviewed relevant documentation about the data systems and guidance provided to the states and territories on how to report recruiting and retention data. We followed up with the ARNG to discuss limitations we identified and requested revised data or made adjustments to our analysis, when possible, to mitigate these limitations. We noted any limitations in the report, where appropriate. Except for data on the reasons why soldiers left the ARNG before completing their military training and data on when soldiers separated during the training process, we found that the data were sufficiently reliable for the purposes of determining (1) the extent to which enlisted soldiers completed their initial military training; (2) the length of time it took these soldiers to complete their initial military training and become qualified for their military occupational specialty; (3) the extent to which enlisted soldiers completed their initial term of service; (4) the length of time enlisted soldiers who did not complete their initial term of service served in the ARNG; and (5) the reasons why enlisted soldiers who graduated from their training but did not complete their initial term of service left the ARNG. We conducted this performance audit from August 2014 to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For individuals who joined the Army National Guard (ARNG) within a given fiscal year, we analyzed whether they completed their initial military training and found that the percentage of ARNG soldiers who completed their training generally increased annually from 63.7 percent in fiscal year 2004 to 80.5 percent in fiscal year 2013. ARNG Strength Maintenance Division officials attributed the improvements in completion of training largely to the Recruit Sustainment Program, which began in fiscal year 2005. The purpose of the Recruit Sustainment Program is to increase the likelihood that ARNG soldiers will graduate from initial military training by ensuring that recruits are mentally prepared and physically fit prior to attending training. The program aims to provide recruits with realistic training that is similar to the first 3 weeks of basic training.
In addition, recruiters stated that the Recruit Sustainment Program allows the ARNG to maintain contact with recruits while they wait to attend training and to monitor their conduct and educational progress to help ensure they stay eligible to join. Table 6 shows our analysis of the extent to which non-prior-service enlisted individuals who joined the ARNG from fiscal years 2004 through 2013 completed their initial military training. The average length of time for non-prior-service enlisted soldiers who joined the Army National Guard (ARNG) from fiscal years 2004 through 2013 to complete initial military training and become qualified for their military occupation for the top 15 military occupational specialties varied from 254 to 357 days. See table 7. Enlisted soldiers who joined the Army National Guard (ARNG) from fiscal years 2001 through 2007 and did not complete their initial term of service typically left within the first 2 years of joining the ARNG. Figure 1 shows the length of time enlisted soldiers who joined from fiscal years 2001 through 2007 and did not complete their initial term of service stayed in the ARNG. We also analyzed the reasons why soldiers did not complete their initial term of service and found that soldiers left for a variety of reasons. Figure 2 shows the reasons why soldiers did not complete their initial term of service for those who joined from fiscal years 2001 through 2007. Relevant DOD, Army, and National Guard guidance documents are described as follows:
- Establishes policy, assigns responsibilities, and provides procedures regarding the tracking and reporting of various recruiting-related data (including the tracking and reporting of enlistment waivers and tracking and reporting of recruiter irregularities).
- Establishes policies and assigns responsibilities for qualitative distribution of manpower accessions, and defines certain DOD quality measures for accessions.
- Governs eligibility criteria, policies, and procedures for enlistment and processing of persons into the Regular Army, the Army Reserve, and the ARNG, among other things.
- Dictates the different types of enlistment or accession programs for enlisted, officers, and warrant officers.
- Provides information on medical fitness standards for induction, enlistment, appointment, retention, and related policies and procedures.
- Establishes a single reference for incentives authorized within the ARNG and Army Reserve and establishes responsibilities for the Department of the Army regarding incentives within the ARNG and Army Reserve.
- Integrates all of the recruiting and retention programs, policies, and procedures necessary for developing, implementing, and monitoring a successful strength maintenance program at the state or territory level.
- Governs policies and procedures for the administration of the ARNG SRIP Programs.
Recommendations from our prior reports related to recruiting and retention include the following:
- To provide greater understanding of the recruiting and retention issues and improve the department's oversight for these issues, the Secretary of Defense should direct the Under Secretary of Defense for Personnel and Readiness, in concert with the Assistant Secretary of Defense for Reserve Affairs, to require the 10 components to report annually on all (not just critical) over- and underfilled occupational specialties, provide an analysis of why occupational specialties are over- and underfilled, and report annually on and justify their use of enlistment and reenlistment bonuses provided to servicemembers in occupational specialties that exceed their authorized personnel levels.
- To provide greater understanding of the recruiting and retention issues and improve the department's oversight for these issues, the Secretary of Defense should direct the Under Secretary of Defense for Personnel and Readiness, in concert with the Assistant Secretary of Defense for Reserve Affairs, to develop a management action plan that will help the components to identify and address the root causes of their recruiting and retention challenges.
- Should the Army decide to offer incentives to officers in the future, the Secretary of Defense should direct the Secretary of the Army to build on currently available analyses that will enable the Army, with the direction and assistance of the Secretary of Defense, to set cost-effective bonus amounts and other incentives.
- To enable the most efficient use of recruiting resources, the Secretary of Defense should direct the Secretary of the Army to collect data on the cost-effectiveness of the Army's conduct waiver policies—including costs associated with the waiver review and approval process and with future separations of soldiers with conduct waivers for adverse reasons—and use these data to inform the Army's waiver policies.
- To enhance its existing processes to recruit and retain sufficient numbers of enlisted personnel and to avoid making excessive payments to achieve desired results, the Secretary of Defense should direct the Secretary of the Army to build on currently available analyses that will enable the Army to set cost-effective enlistment and reenlistment bonuses.
- To enable the Army to make informed decisions regarding the management of its officer corps over time, the Secretary of Defense should direct the Secretary of the Army to track—and if necessary correct—any effects that its actions to alleviate shortages may have on the officer corps, particularly in cases in which the Army has deviated from benchmarks established in the Defense Officer Personnel Management Act.
- The Secretary of Defense should direct the Secretaries of the Army and Navy to identify mechanisms for the regular sharing of the recruiter irregularity data throughout all levels of command.
- The Secretary of Defense should direct the Under Secretary of Defense for Personnel and Readiness to complete and issue the instruction on tracking and reporting data on recruiter irregularities to clarify the requirements for the types of recruiter irregularities to be reported and the placement of recruiter irregularity cases and actions taken into reporting categories.
- The Secretary of Defense should direct the Under Secretary of Defense for Personnel and Readiness to direct the relevant offices within the National Guard Bureau to adjust their reporting procedures in ways that will provide transparency in the data reported to OSD and any limitations on the data.
- The Secretary of Defense should direct the Under Secretary of Defense for Personnel and Readiness to include the appropriate disclosures concerning data limitations in the recruiter irregularity reports that OSD produces on the basis of the National Guard data for the Congress and others.
In addition to the contact named above, Vincent Balloon, Assistant Director; Amie Lesser; Richard Powelson; Terry Richardson; Christine San; Jared Sippel; Norris "Traye" Smith; Sabrina Streagle; Elizabeth Van Velzen; and Michael Willems made key contributions to this report. Military Recruiting: Clarified Reporting Requirements and Increased Transparency Could Strengthen Oversight over Recruiter Irregularities.
GAO-10-254. Washington, D.C.: January 28, 2010.
Military Personnel: Army Needs to Focus on Cost-Effective Use of Financial Incentives and Quality Standards in Managing Force Growth. GAO-09-256. Washington, D.C.: May 4, 2009.
Military Personnel: DOD Needs Action Plan to Address Enlisted Personnel Recruitment and Retention Challenges. GAO-06-134. Washington, D.C.: November 17, 2005.
Military Attrition: DOD Needs to Better Analyze Reasons for Separation and Improve Recruiting Systems. GAO/T-NSIAD-98-117. Washington, D.C.: March 12, 1998.
Military Recruiting: DOD Could Improve Its Recruiter Selection and Incentive Systems. GAO/NSIAD-98-58. Washington, D.C.: January 30, 1998.
Military Attrition: DOD Could Save Millions by Better Screening Enlisted Personnel. GAO/NSIAD-97-39. Washington, D.C.: January 6, 1997.
Recruiters are often referred to as the "face" of the ARNG. In the past, there have been allegations of recruiter misconduct and misuse of financial incentives, making it important for recruiters to ensure procedures are followed when working with applicants and that incentives to join the ARNG are awarded properly and effectively. House Report 113-446 included a provision for GAO to review the ARNG's recruiting practices. This report evaluates the extent to which (1) ARNG has provided oversight of its recruiting process; (2) ARNG met its goals for recruiting, completion of initial military training, and initial term of service; and (3) OSD, Department of the Army, and ARNG have conducted oversight of ARNG's enlistment financial incentives programs. For this work, GAO reviewed DOD and ARNG recruiting policy and procedures and interviewed cognizant officials. GAO analyzed data on recruiting from FY2010 through FY2014, training from FY2011 through FY2014, and initial term of service for FY2015. GAO visited four states representing a range of sizes and locations. The Army National Guard (ARNG) has taken steps to increase oversight of its recruiting process, which is conducted primarily by recruiters dispersed at the state level, but has not established a permanent program to monitor state-level recruiting activities. In June 2014, the ARNG created a Recruiting Standards Branch that has started to conduct inspections of state offices. The Recruiting Standards Branch completed inspections in 16 states from October 2014 through July 2015 and found that 2 states did not achieve full compliance in their inspections. However, this is not a permanent program, and ARNG officials stated that they are staffing it with positions intended for use in other areas. The ARNG is seeking approval for permanent staff by early 2017 to continue its oversight. Continued monitoring of state-level recruiting activities, such as through a permanent recruiting standards branch, will be important to ARNG's oversight functions. The ARNG had mixed results in meeting its overall recruiting goals and nearly met its goals for initial military training; however, the ARNG does not track whether soldiers are completing their initial term of service or military obligation. The ARNG met its recruiting goals in 2 of the 5 years from fiscal years (FY) 2010 through 2014. While the ARNG nearly met its goals for training completion from FY 2011 through 2014, GAO found that the ARNG does not have complete, consistent, and valid data on why soldiers do not complete training and when they separate during training. Without consistent, complete, and valid data, decision makers do not have the information needed to determine why soldiers are not completing training. The ARNG also does not track whether soldiers are completing their initial term of service. GAO's analysis shows that about 40 percent of enlisted soldiers who joined the ARNG from FY 2001 through 2007 did not complete their initial term of service. Without tracking completion of initial term of service, ARNG officials cannot assess whether their programs are effective in meeting personnel requirements and do not have visibility to ensure the ARNG is maximizing its investment in its soldiers. The Office of the Secretary of Defense (OSD), Department of the Army (Army), and ARNG have not fully conducted their oversight responsibilities of ARNG enlistment financial incentives programs.
OSD has not enforced a requirement that ARNG report incentives obligated through the ARNG incentives programs. Further, although Army and National Guard regulations require evaluations of the effectiveness of the ARNG financial incentives programs, the Army and ARNG have not evaluated and documented the effectiveness of the programs. Without evaluating and documenting the effectiveness of ARNG incentives programs, officials may not know whether changes are needed for effective use of incentives or whether certain financial incentives are not needed. Moreover, the ARNG has not ensured that recruiters have an understanding of available financial incentives. Financial incentives are a tool available to recruiters, and agency policy states that incentives are available to assist in meeting and sustaining readiness requirements and to assist in filling critical shortages. The ARNG has not provided recruiters with training on using financial incentives. With additional training, recruiters could better understand when and how to offer financial incentives to fill critical positions. GAO recommends, among other things, that ARNG take actions to collect consistent, complete, and valid data on soldiers who do not complete training and initial term of service, and evaluate and document its incentives programs. DOD concurred with GAO's recommendations but stated that it did not concur with the report due to GAO's depiction of waivers. GAO disagrees with DOD's characterization, as discussed in the report.
The DRC is a vast, mineral-rich nation with an estimated population of about 75 million people and an area that is roughly one-quarter the size of the United States, according to the UN. The map in figure 1 shows the DRC's provinces and adjoining countries. Since its independence in 1960, the DRC has undergone political upheaval, including a civil war, according to State. In particular, the eastern DRC has continued to be plagued by violence often perpetrated by illegal armed groups and some members of the Congolese national military against civilians. In November 2012, M-23, an illegal armed group, occupied the city of Goma and other cities in eastern DRC and clashed with the Congolese national army. During this time, the UN reported numerous cases of sexual violence against civilians, including women and children, that were perpetrated by armed groups and some members of the Congolese national military. Although M-23 eventually withdrew from the cities, the group's presence in the region continued. In December 2012, the Ugandan president began to broker peace talks, known as the Kampala Dialogue, between M-23 and the DRC government, aimed at reaching a final and principled agreement that ensured the disarmament and demobilization of M-23 and accountability for human rights abuses. M-23 was defeated in November 2013 by the Congolese national military with support from UN forces. In December 2013, the former M-23 and the DRC each signed individual declarations that, among other things, set out the conditions for the disarmament, demobilization, and reintegration of M-23 into Congolese society and called for those responsible for war crimes and crimes against humanity to be held accountable. Prior to the defeat of M-23, in February 2013, the 11 countries in the region adopted the "Peace, Security and Cooperation Framework for the Democratic Republic of the Congo and the Region." Some of the adjoining countries in the region have also experienced recent turmoil, which has led to large numbers of refugees flowing into the DRC in addition to internally displaced persons. The United Nations High Commissioner for Refugees (UNHCR) estimated, as of mid-2013, that there were close to 50,000 refugees from the Central African Republic, in addition to over 120,000 refugees from other countries, as well as around 2.6 million internally displaced persons living in camps or with host families in the DRC. Congress has focused on issues related to the DRC for almost a decade. In 2006, Congress passed the Democratic Republic of Congo Relief, Security, and Democracy Promotion Act of 2006, stating that U.S. policy is to engage with governments working for peace and security throughout the DRC and hold accountable individuals, entities, and countries working to destabilize the government. In July 2010, Congress passed the Dodd-Frank Act, which included several provisions in section 1502 of the Act concerning conflict minerals in the DRC and adjoining countries. The Act directs State, USAID, SEC, and Commerce to take steps on matters related to the implementation of those provisions (see text box).
Provisions in the Dodd-Frank Act Related to Conflict Minerals in the DRC and Adjoining Countries
Section 1502(a) states that "it is the sense of the Congress that the exploitation and trade of conflict minerals originating in the Democratic Republic of the Congo is helping to finance conflict characterized by extreme levels of violence in the eastern Democratic Republic of the Congo, particularly sexual- and gender-based violence, and contributing to an emergency humanitarian situation therein, warranting the provisions of section 13(p) of the Securities Exchange Act of 1934, as added by subsection (b)." Section 1502(b) requires SEC, in consultation with State, to promulgate disclosure and reporting regulations regarding the use of conflict minerals from DRC and adjoining countries. Section 1502(c) requires State and USAID to develop, among other things, a strategy to address the linkages among human rights abuses, armed groups, the mining of conflict minerals, and commercial products. Section 1502(d) requires that Commerce report, among other things, a listing of all known conflict minerals processing facilities worldwide.
In addition, in July 2013, the United States appointed the current Special Envoy for the Great Lakes Region and the DRC, whose office develops and leads the implementation of U.S. regional policy on cross-border security, political, economic, and social issues. The Special Envoy leads U.S. efforts to support the implementation of the Peace, Security, and Cooperation Framework Agreement, including the development and implementation of a comprehensive strategy to stop human suffering and violence in the region, by promoting political, economic, and social reconciliation. In 1999, the UN Security Council authorized peacekeeping operations in the DRC, known as the UN Organization Mission in Democratic Republic of the Congo (MONUC). MONUC's mission included achieving a ceasefire and protecting civilians and other nonmilitary personnel from threats of physical violence. In 2010, MONUC was replaced by the UN Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO), whose priorities also include protecting civilians and stabilizing the country. The international community has also responded to the conflict in the DRC and adjoining countries by appointing special envoys to the region. For example, in March 2013, the UN appointed a Special Envoy of the Secretary-General for the Great Lakes Region to support the implementation of the 11-nation Peace, Security and Cooperation Framework for the Democratic Republic of the Congo and the Region. According to the UN, the envoy's key tasks include undertaking good offices to strengthen the relations between the signatories of the framework, revitalizing existing accords, and coordinating the international engagement. The objectives of a new intervention force within MONUSCO, based in North Kivu province, are to neutralize armed groups, reduce the threat they pose to state authority and civilian security, and make space for stabilization activities. In addition, the European Union (EU) is exploring possible legislation related to conflict minerals and responsible sourcing. According to a European Commission release, in March 2014 the EU proposed a draft regulation setting up an EU system of self-certification for importers of tin, tantalum, tungsten, and gold for imports into the EU.
The draft regulation indicated that the self-certification would align with the Organization for Economic Cooperation and Development's (OECD) "OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas," which includes a five-step framework for risk-based due diligence in the supply chain. According to the release, the regulation gives EU importers an opportunity to deepen ongoing efforts to ensure clean supply chains when trading legitimately with operators in conflict-affected countries. Canada also has a proposed conflict minerals initiative. According to State, Canada's Conflict Minerals Act was reintroduced for discussion in the Canadian parliament in April 2014 and would require Canadian companies to exercise due diligence in respect of the exploitation and trading of designated conflict minerals originating in the Great Lakes Region of Africa.
Uses of Conflict Minerals
Various industries, particularly manufacturing industries, use the four conflict minerals in a wide variety of products. For example, tin is used to solder metal pieces and is also found in food packaging, in steel coatings on automobile parts, and in some plastics. Most tantalum is used to manufacture tantalum capacitors, which enable energy storage in electronic products such as cell phones and computers; tantalum is also used to produce alloy additives, which can be found in turbines in jet engines. Tungsten is used in automobile manufacturing, drill bits and cutting tools, and other industrial manufacturing tools and is the primary component of filaments in light bulbs. Gold is held as reserves and used in jewelry, and it is also used by the electronics industry. Supply chains for companies using tin, tantalum, tungsten, and gold generally begin at the mine site, where ore is extracted from the ground with mechanized or artisanal mining techniques. However, these supply chains can be complex and vary considerably, according to some industry association and company representatives. For example, as figure 2 shows, in the "upstream" segment of the supply chain—that is, from mine to smelter—ore may be purchased by a local processor or trader and then by an exporter, who ships it to a smelter for refinement; in other cases, the ore may be sold directly to an exporter. The "downstream" segments of conflict mineral supply chains—that is, from smelter to manufacturer—may vary as well, depending in part on the type of mineral. Figure 2 provides a simplified depiction of the supply chain for the four conflict minerals. Smelters and refiners are considered the "choke points" in the supply chain, since a limited number of smelters and refiners process conflict minerals worldwide and the origin of the minerals after processing can be difficult to verify. Smelters primarily provide high-purity tin, tantalum, and tungsten directly to component parts manufacturers, although some sell high-purity metals through traders or exchanges. Gold refiners typically sell high-purity gold to banks for use as a store of value or to international exchanges where gold is bought and sold, although some refiners sell gold directly to manufacturers; banks and traders may also sell gold to manufacturers, including jewelry and component parts manufacturers. Component parts manufacturers use the refined tin, tantalum, tungsten, or gold to construct individual parts—such as capacitors, engine parts, or clasps for necklaces—that they sell to original equipment manufacturers.
The original equipment manufacturers complete the final assembly of a product and sell the final product to the consumer.
Global and In-Region Sourcing Initiatives
Global sourcing initiatives may minimize the risk that minerals that have been exploited by illegal armed groups will enter the supply chain and may also support companies' efforts to identify the source of the conflict minerals across the supply chain around the world. In-region sourcing initiatives may support responsible sourcing of conflict minerals from Central Africa and the identification of specific mines of origin for those minerals. Such initiatives in the DRC and adjoining countries focus on tracing minerals from the mine to the mineral smelter or refiner by supporting a bagging and tagging program or some type of traceability scheme. Various stakeholders—including governments, industry associations, international organizations, and international and local NGOs working in the Great Lakes Region—operate or support initiatives to promote and exercise responsible sourcing of conflict minerals. Stakeholder-developed initiatives—which include the development of guidance documents, audit protocols, and sourcing practices—support efforts by companies reporting to SEC under the rule to (1) conduct due diligence of their conflict minerals supply chain, (2) identify the source of conflict minerals within their supply chain, and (3) responsibly source conflict minerals. The initiatives can be divided into two categories: global or in-region. Most responsible sourcing initiatives follow OECD's due diligence guidance, which addresses the sourcing of minerals or metals from conflict-affected and high-risk areas and, according to OECD, is one of the only international frameworks available to help companies meet their due diligence reporting requirements. (See Organization for Economic Cooperation and Development, OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas: Second Edition (Paris: November 2012).) Since the Act was passed in 2010, State, USAID, SEC, and Commerce have undertaken activities related to implementation of the Act's conflict minerals provisions, including activities related to responsible sourcing of such minerals from the DRC and adjoining countries. As required by the Act, State and USAID developed a strategy in 2011 aimed at addressing the linkages between human rights abuses, armed groups, the mining of conflict minerals, and commercial products and are implementing various objectives of the strategy. State also produced a map of mineral-rich areas under control of armed groups in the DRC. SEC issued its required conflict minerals rule in 2012. As of May 2014, Commerce had taken steps toward producing a list of all conflict minerals processing facilities worldwide, which the Act required by January 2013, but had not completed the task. Moreover, Commerce had not developed a plan of action with associated time frames for how and when it expects to complete this effort and report to Congress. Standard practices in program and project management and execution include, among other things, developing a plan to execute specific projects needed to obtain defined results within a specific time frame. State, USAID, SEC, and Commerce also have engaged in activities involving stakeholder partnerships and outreach and have provided technical assistance to other governments related to activities focused on the responsible sourcing of conflict minerals.
The Act directed State, USAID, SEC, and Commerce to undertake various activities to implement its provisions related to conflict minerals. Since the Act was passed in 2010, the agencies have taken the following actions. Responding to the Act, State and USAID developed a strategy in 2011 to address the linkages among human rights abuses, armed groups, the mining of conflict minerals, and commercial products. The Act required State and USAID to submit, by January 2011, a strategy to address the linkages between human rights abuses, armed groups, mining of conflict minerals, and commercial products. The strategy document that State and USAID submitted to Congress in 2011 lists five objectives: (1) promote an appropriate role of security forces, (2) enhance civilian regulation of minerals trade in the DRC, (3) protect artisanal miners and local communities, (4) strengthen regional and international efforts, and (5) promote due diligence and responsible trade through public outreach. The strategy includes activities corresponding with each of these objectives—for example, building the capacity of civilian mining authorities in the DRC to certify mine sites, supporting the implementation and coordination of certification and traceability schemes, building the capacity of the International Conference on the Great Lakes Region (ICGLR) related to mineral audit mechanisms, and engaging with industries and civil society groups regarding supply chain due diligence efforts. State and USAID officials indicated that they have been implementing objectives of the strategy over the past few years. According to the U.S. Special Envoy for the Great Lakes Region and the DRC, the strategy remains relevant and accurate, and State has used it in conjunction with other U.S. government agencies as a roadmap for efforts to help break the link between armed groups and conflict minerals. In addition, in 2011, State developed a map of mineral-rich zones and areas under control of armed groups in the DRC and has subsequently published several updated maps, as required by the Act. The maps focus on the exploitation of tin, tantalum, tungsten, and gold in North and South Kivu provinces and in parts of Orientale, Maniema, and Katanga provinces. According to State, the most current map, which State published in February 2014, was based on data from surveys conducted in 2013 by the International Peace Information Service (IPIS), an NGO, and on information from consultations with the DRC government, the UN Group of Experts, and MONUSCO (see app. II for the 2014 map). State reported that lack of complete or fully verifiable data makes it difficult to confirm the location of many mine sites, to establish which mine sites are active at any given time, and to comprehensively verify reports of armed groups or other entities that are either present at mines or have access to revenue streams emanating from them. State officials indicated that in the future the map may become digital rather than paper based. As we previously reported, SEC issued its conflict minerals rule in August 2012. The Act required that SEC promulgate, by April 2011, disclosure and reporting regulations regarding the use of conflict minerals from the DRC and adjoining countries. SEC issued a "frequently asked questions" (FAQ) document in May 2013 to address questions from companies that will have to report to SEC under the conflict minerals rule. SEC officials indicated that these FAQs included questions posed most often by companies regarding interpretation of the rule.
In April 2014, SEC issued additional FAQs addressing questions that mostly pertained to the independent private sector audit of companies' conflict minerals disclosure reports. According to an SEC official, these FAQs were based on interpretive questions asked by SEC-reporting companies and the audit community. In January 2014, SEC made "Form SD," a specialized disclosure form for reporting compliance with the conflict minerals rule, available for electronic filing. The form, originally published with the conflict minerals rule, provides general instructions to SEC-reporting companies for filing the conflict minerals disclosure and specifies the information that their conflict minerals reports must include. SEC-reporting companies were required to file under the rule for the first time by June 2, 2014, and annually thereafter on May 31. According to SEC officials, based on preliminary feedback they received, they anticipated that most SEC-reporting companies subject to the rule would be unable to determine whether or not their products qualified as "DRC conflict-free." More than a year after the deadline required by the Act, Commerce has not yet fulfilled its mandate under section 1502 of the Act. Section 1502 directed Commerce to report, among other things, a list of all known conflict minerals processing facilities worldwide to appropriate congressional committees annually starting no later than 30 months after the Act's enactment—that is, by January 2013. As of May 2014, Commerce had not developed such a list or developed a plan of action, with associated time frames, for completing this requirement and reporting it to Congress. Standard practices in program and project management include developing, among other things, a program plan to execute specific projects needed to obtain defined programmatic results within a specific time frame. In January 2014, Commerce officials told us that they had identified entities that they hoped would help them identify publicly available information about conflict minerals and identify stakeholders who are knowledgeable about conflict minerals issues. Specifically, Commerce officials indicated that they had assembled a proposed internal outreach plan, which includes meeting with stakeholders to discuss how these organizations have gathered information on conflict mineral smelters and identifies other efforts that Commerce can explore to develop the list of conflict minerals processing facilities. Commerce officials also indicated that they anticipated a 3- to 4-month time frame for the proposed outreach efforts to talk to stakeholders. In May 2014, Commerce officials stated that they had completed discussions with the majority of the stakeholders identified in the outreach plan and have developed several preliminary lists of conflict minerals processing facilities, based on information they obtained from the stakeholders. However, Commerce officials stated that they did not have a time frame for completing the final list for Congress. As of May 2014, Commerce officials said that they had encountered some challenges associated with gathering data on conflict minerals to help inform their outreach plan and required reporting. For example, according to the officials, conflict minerals and mining operations are difficult to track; because the equipment used to process conflict minerals can be moved easily, such operations can emerge in different locations. In addition, Commerce officials mentioned that some conflict minerals data may be inaccessible to the U.S.
government because a large number of conflict mineral smelters are in China. Having an action plan with associated time frames could better position Commerce to report on the status of its efforts to compile a list of conflict minerals processing facilities worldwide and to hold its personnel accountable for completing its related activities. Some stakeholders that we contacted, including government and industry officials and representatives of the UN Group of Experts and an NGO, indicated that a comprehensive list of conflict minerals smelters and refiners—considered the "choke point" of the supply chain—would be very useful in the effort to ensure responsible sourcing of minerals in the DRC and adjoining countries. According to these stakeholders, such a list would enable companies that are subject to the SEC rule to maintain transparency regarding their supply chains, particularly in their communications with smelters, and would also provide companies the information they need for their SEC-required conflict minerals disclosure reports. U.S. government agencies have engaged in a variety of activities that involve partnerships and coordination with other stakeholders or outreach to stakeholders, and some agencies have provided technical assistance to stakeholders regarding responsible sourcing of conflict minerals. Some agencies' activities contribute to global and in-region responsible sourcing initiatives, and some of the activities address the implementation of objectives outlined in the strategy to address the linkages between human rights abuses, armed groups, and the mining of conflict minerals. Some U.S. agencies have partnered and coordinated with other stakeholders—other government agencies, industry, and civil society—regarding issues related to responsible sourcing of conflict minerals. For example: State and USAID, both in headquarters and posts or missions overseas, and other U.S. agencies coordinate with one another on weekly or biweekly conference calls to discuss the progress of responsible sourcing efforts, provide updates on recent events, and collaborate on future events, according to State and USAID officials. USAID works in a collaborative and coordinated manner with State in Washington and regionally, using the 2011 U.S. strategy as a framework for the coordination, according to USAID officials. The officials indicated that funding also has been coordinated between the two agencies across the five objectives of the strategy and totals over $25 million, as of 2013. State and USAID coordinate with other stakeholders through the Public-Private Alliance for Responsible Minerals Trade (PPA) to fund and support organizations working on responsible sourcing efforts. Both State and USAID are on the PPA's Governance Committee, which consists of participants from foreign governments, industry, and civil society. USAID has partnered with the International Organization for Migration to help enhance civilian control of the DRC's mineral trade through infrastructure improvements and institutional reforms, according to agency officials. The officials reported that with USAID funding, the organization will also establish pilot certification and traceability systems in and around the mineral trading centers and other areas of South and North Kivu. USAID has partnered and coordinated with stakeholders in the DRC, according to agency officials.
For example, the officials said that USAID is coordinating with the DRC government regarding various aspects of minerals trade, is involved in the multi-stakeholder Mining Thematic Group in the DRC, and facilitates the Eastern Congo Mining Coordination Team. On a multilateral level, both USAID and State participate in the OECD Responsible Sourcing Stakeholder Forums held every 6 months, according to USAID officials. This forum, coordinated by the ICGLR, OECD, and the UN Group of Experts, is a platform for governments, the private sector, international organizations, and civil society to share experiences with implementation of supply chain due diligence for responsible sourcing of minerals from conflict-affected and high-risk areas. Both State and USAID officials have participated at times as facilitators of these forums. The current U.S. Special Envoy for the Great Lakes Region and the DRC has collaborated with other stakeholders, such as the UN Special Envoy for the Great Lakes, the African Union, and other multilateral and bilateral partners, to strengthen international coordination mechanisms on the crisis in the Great Lakes, according to State. These efforts have taken place under the Peace, Security, and Cooperation Framework Agreement for the DRC and the Region. A couple of U.S. government agencies indicated that they have conducted outreach to various stakeholders to promote responsible sourcing of conflict minerals and to obtain information about conflict mineral sourcing and supply chains. For example, State and Commerce officials reported the following. State officials told us that State has engaged with foreign governments and industry associations regarding the Dodd-Frank Act requirements. According to these officials, State's efforts have included sending letters about section 1502 of the Act to foreign governments that are prominent in the conflict mineral supply chain as well as encouraging these governments and companies in those countries to support the aim of the legislation. In a November 2013 briefing, the Deputy Assistant Secretary of State for Counter Threat Finance and Sanctions reported that he had traveled to Asia and Europe to talk with representatives from smelters and governments about responsible sourcing initiatives and encourage participation in such initiatives. State officials also indicated that they have facilitated outreach efforts for industry associations, such as the Conflict-Free Sourcing Initiative (CFSI) and others, to help them secure meetings in Asian countries to discuss conflict-free mining and smelting. According to the State officials, during the outreach some members from industry and industry associations expressed interest in talking to Commerce about smelters and support for responsible sourcing. Commerce officials stated that their proposed outreach plan identified entities that could enable them to develop the list of conflict mineral smelters and refiners required by the Act. According to these officials, they have conducted outreach to these entities, including government agencies, industry associations, international organizations, and NGOs. Some U.S. agencies have provided technical assistance related to responsible sourcing of conflict minerals to various stakeholders. According to agency officials, these stakeholders have consisted primarily of other governments, particularly in the Great Lakes Region.
For example: State officials said that they had shared experiences and challenges related to implementing the Act with officials from the EU who were working on proposed conflict minerals legislation. SEC officials stated that they had discussed with EU officials issues that SEC considered when drafting the conflict minerals rule as well as questions about the rule that SEC received from industry. USAID officials stated that they had been working with the ICGLR in providing technical assistance on conflict minerals programs, particularly through the steering committee for ICGLR’s Regional Initiative against the Illegal Exploitation of Natural Resources. Specifically, according to USAID officials, USAID has been implementing a multiyear institutional capacity program in support of the ICGLR to build the overall strength of the Executive Secretariat as well as the ICGLR’s regional initiative. USAID officials said that the agency will soon begin implementing activities to support a third-party supply chain audit mechanism and an independent conflict minerals supply chain auditor. Since we reported in July 2013, stakeholders have expanded existing initiatives and added new initiatives focused on responsible sourcing of conflict minerals in the DRC and adjoining countries, to include new mine sites, countries, and smelters. Some of these initiatives have yielded publicly available information, including data on production of conflict-free minerals and export data, as well as reports on the progress and results of the initiatives. However, this information is limited in scope and thus may not provide a comprehensive description of the sourcing of conflict minerals from the DRC and adjoining countries. Stakeholders have recently expanded, or made plans to expand, a number of existing global and in-region responsible sourcing initiatives, and two new initiatives are underway. Figure 3 shows the starting dates for existing, expanding, and new responsible sourcing initiatives. According to some stakeholders we interviewed, improvement in security in eastern DRC and industries’ growing awareness of responsible sourcing and the Act’s requirements may account for the expansion of responsible sourcing initiatives. The following are examples of global responsible-sourcing initiatives that stakeholders have expanded since we reported in July 2013. The Conflict-Free Sourcing Initiative (CFSI) has expanded in several aspects related to responsible sourcing. First, CFSI’s Conflict-Free Smelter Program has expanded the number of smelters it has certified as conflict-free. The program is a voluntary one in which smelters undergo an independent third-party audit, in accordance with OECD’s due diligence guidelines, to verify the origin of minerals processed at their facilities. The number of smelters that the program has certified as conflict-free has expanded from 26 smelters in summer 2013 to 85 smelters as of April 25, 2014 (see table 1). An additional 25 smelters are in the process of being certified, bringing the total number of smelters involved in the program to 110. As of January 2014, the Conflict-Free Smelter Program has expanded to include smelters for tungsten in addition to the other three conflict minerals. Second, according to CFSI representatives, through outreach to industry, CFSI has expanded its collaboration with companies involved with the conflict minerals supply chain. CFSI’s outreach includes twice-yearly workshops on conflict minerals issues that are open to all participants. 
According to CFSI, outreach such as these workshops brings together hundreds of representatives from industry, government, and civil society for updates, in-depth discussions, and guidance on best practices for responsible mineral sourcing. CFSI officials stated that such outreach recently resulted in collaboration with the tungsten industry, which led to certification of the first conflict-free tungsten smelter in 2014. Third, in 2014, CFSI began offering its members information about the SEC-required “reasonable country of origin” data for conflict minerals, providing the most detailed information currently available about the source of conflict minerals for smelting and refining facilities that are validated through the Conflict-Free Smelter Program. According to CFSI, this information may be useful to companies as they prepare the conflict minerals disclosure reports required by the SEC rule and demonstrate conformance with the OECD due diligence guidelines. In January 2012, the London Bullion Market Association (LBMA), which represents the global market for gold and silver, finalized and published its Responsible Gold Guidance to ensure that the gold refiners it accredits purchase only conflict-free gold. The refiners accredited by LBMA are required to complete an annual third-party audit to verify their compliance with the LBMA guidance, according to an LBMA official. As of March 2014, of the 67 gold refiners that LBMA oversees, more than three-quarters had successfully submitted their audits and received the Responsible Gold Certificate, according to the official. The representative stated that if a refiner does not submit a third-party audit by the end of 2014, the refiner will be removed from LBMA’s list of accredited refiners. An LBMA official said that the association also collaborates with other responsible sourcing stakeholders and global gold exchanges and works closely with OECD. For example, the official said that, working through OECD, LBMA has met with Chinese industry representatives to clarify the purpose and benefits of conducting due diligence audits of their refiners.
Responsible Jewellery Council Chain-of-Custody Certification Program
The Responsible Jewellery Council—a diamond and precious metals industry association—launched a chain-of-custody certification program in March 2012 to help its member companies identify and track conflict-free gold throughout their supply chains. The program’s requirements, which are aligned with the OECD Due Diligence Guidance for gold, include a third-party audit of each certified entity to ensure that its gold is conflict-free, according to the Responsible Jewellery Council. According to an official with the Responsible Jewellery Council, this certification can support companies’ compliance with the Dodd-Frank Act. As of April 2014, nine entities had been validated under the council’s certification program and more entities were in the process of being certified, according to the official. The following are examples of in-region responsible sourcing initiatives that stakeholders have expanded, or made plans to expand, since we reported in July 2013.
ITRI Tin Supply Chain Initiative
The ITRI Tin Supply Chain Initiative (iTSCi) recently announced that it is expanding its in-region operations. 
The initiative works with “upstream” entities (i.e., companies involved in the conflict minerals supply chain from mine to smelter) in instituting the actions, structures, and processes necessary to conform with the OECD Due Diligence Guidance and helps relevant U.S. companies report on their due diligence efforts to the SEC as required by the Dodd-Frank Act. The assistance that iTSCi provides includes a system to trace bags of minerals from the mines to the exporter, due diligence audits of iTSCi’s member companies, and assessments of the political and security situations, which have been conducted at various mine sites in the DRC and Rwanda. In February 2014, iTSCi announced that it was expanding its traceability program into a remote area in the northern region of the Maniema province and into the North Kivu province of the DRC. According to iTSCi, improved security in North Kivu, which has a history of armed conflict, accounts in part for the expansion into the province. An iTSCi official stated that the initiative is currently looking at options for extending into South Kivu. Additionally, in April 2014, iTSCi announced that the program had started operations in Burundi. According to iTSCi, there is presently little evidence of activity by nongovernment armed groups in Burundi, since there have been no reports that armed groups are controlling mine sites or transportation routes, extorting money or minerals, or illegally taxing the trade of minerals. iTSCi further reported that it may extend the program to Uganda and eventually to the entire Great Lakes Region. Also, an iTSCi official stated that the initiative had successfully piloted technology in Rwanda to collect and manage data on conflict minerals electronically, which would replace the current paper-based system and increase efficiency of data collection. Launched by Motorola Solutions and AVX in 2011, the Solutions for Hope tantalum program is a “closed-pipeline” initiative that traces the flow of tantalum from the mine to the end-use company. In June 2013, Solutions for Hope reported that it had completed six shipments of tantalum, totaling more than 145 metric tons, from the Katanga province in the DRC. According to officials, in part because of improved security in the province, the initiative started sourcing tantalum from North Kivu in March 2014. Officials also noted that Solutions for Hope is exploring a closed-pipeline system for gold in the DRC. The Conflict-Free Tin Initiative (CFTI), a multistakeholder effort supported by the Netherlands government, is a closed-pipeline initiative, similar to Solutions for Hope, started in October 2012 for sourcing tin from the South Kivu province of the DRC. According to CFTI, the initiative has expanded its mining operation to Maniema, a province bordering South Kivu, which is less prone to conflict and where the government is reinvesting tax income into the mining communities. According to USAID, a CFTI stakeholder, from October 2012 to December 2013, the initiative generated a total export value of more than $3 million. In 2010, the International Conference on the Great Lakes Region (ICGLR) began working with an NGO to develop a regional certification mechanism to ensure that conflict minerals are fully traceable. 
ICGLR’s Regional Certification Mechanism (RCM) enables member countries and their mining companies to demonstrate where and under what conditions minerals were produced, allowing member governments to issue ICGLR regional certificates for those mineral shipments that are in compliance with the standards of the mechanism. The ICGLR issued its first certificate in November 2013 to a mine in Rwanda. According to an ICGLR official, the DRC launched its certificate program but had not yet issued any certificates as of November 2013. He added that Tanzania and Burundi may be able to issue certificates by the end of 2014. The ICGLR official noted several challenges in instituting the RCM in the DRC and the region. For example, he said that it is logistically difficult to catalog all mines in each country. In addition, the official noted that training local officials to use the RCM software is difficult and time-consuming. The official added that it takes member countries 1 year to prepare for all components associated with launching the RCM. In the past year, one existing stakeholder has launched a new in-region responsible sourcing initiative and a new stakeholder has established an initiative. The German government’s Federal Institute for Geosciences and Natural Resources (BGR), an existing stakeholder, launched a new initiative in the past year. According to a representative, BGR’s primary role is to support the region’s governments and to build government capacity. In 2013, BGR initiated the Analytical Fingerprint Project to allow for independent verification of the origin of conflict minerals by comparing the composition of tantalum, tin, and tungsten concentrate samples of a known origin with unknown samples, similar to a DNA test. According to a BGR official, the project has three types of units—sample preparation labs, high-tech labs, and a management unit. To date, BGR has established sample preparation labs, located in Rwanda since 2013 and in the DRC since 2014, where mineral ore samples are prepared for analysis. A third sample preparation lab is under construction in Burundi. BGR is in negotiations to establish a high-tech lab in Tanzania, which would receive and analyze the samples from the preparation labs. Additionally, in 2013, according to a BGR official, BGR established a project management unit at the ICGLR headquarters in Burundi, which evaluates the raw data and produces the analytical fingerprints.
Better Sourcing Program Responsible-Sourcing Pilot
The Better Sourcing Program (BSP), a private company that offers an independently audited due diligence assurance program to enable companies to source tantalum, tin, tungsten, and gold from the region, became a new stakeholder in the region in 2013. The Better Sourcing Program established a pilot program that covers a tantalum supply chain originating from the Republic of the Congo (also known as Congo-Brazzaville), which is the first responsible-sourcing initiative in that country. Better Sourcing Program officials stated that they chose to pilot the initiative in the Republic of the Congo because no other scheme to support producers existed there, because the country is relatively conflict-free, and because the government has been cooperative. Some stakeholders that we interviewed noted various challenges to expanding or launching responsible-sourcing efforts in the DRC and adjoining countries. For example: Lack of infrastructure. 
Some stakeholders reported that inadequate infrastructure in the DRC and adjoining countries affects their ability to operate in the region and expand initiatives to new areas. Representatives from Solutions for Hope stated that infrastructure in the DRC cannot support a large-scale smelter. According to these representatives, the power supply in the DRC can be inconsistent and, because smelting facilities require a large, consistent power supply to function properly, all tin, tantalum, and tungsten currently are exported from the DRC and Rwanda for smelting. Lack of government support. Some stakeholders reported operational challenges related to the region’s national and provincial governments. For example, stakeholders involved in the Conflict-Free Tin Initiative stated that sales of tin from their mine in South Kivu were halted for nearly 2 months in 2013 after the provincial government of the region imposed harsh taxes on the minerals mined there. Additionally, Solutions for Hope officials reported that the DRC’s current tax structure is not conducive to legitimizing gold and that expanding the initiative to include gold is therefore difficult. Lack of buyers for conflict minerals from conflict zones. Several stakeholders and agency officials reported that some companies are reluctant to buy minerals produced in the DRC and adjoining countries because of the high cost of the due diligence required by the Dodd-Frank Act and the perceived reputational risk. For example, according to one industry official, a major challenge to responsible sourcing in the region is that the cost of complying with the SEC rule makes it difficult for SEC-reporting companies to compete in the global market against companies that are not required to perform costly due diligence. In addition, an official from the Responsible Jewellery Council noted that gold is mined in many locations around the world and that production costs must always be taken into account. She stated that because the cost of due diligence for gold is usually proportional to risk, mining and sourcing responsibly in the Great Lakes Region could become more expensive than in other, lower-risk areas and that this represents a challenge to responsible sourcing efforts in the region. Some stakeholders and governments in the region provide publicly available information related to in-region mining of conflict minerals and responsible sourcing initiatives. According to industry officials, the amount of publicly available data reported by responsible sourcing initiatives has increased over the past year. We found that iTSCi publishes various reports on its public website as part of its due diligence system for its members. These reports provide production and export data for tin, tantalum, and tungsten, including amounts traced through the iTSCi program, from three provinces in the DRC and Rwanda, as well as the mineral sales in U.S. dollars (for more details, including the applicable quantitative data, see app. IV). Also on its website, iTSCi publishes third-party audits of member companies, which assess the extent to which the companies have implemented the OECD Due Diligence Guidance and evaluate the companies’ adherence to iTSCi’s traceability and due diligence procedures. Additionally, iTSCi publishes governance assessments covering a range of topics. 
Examples include the security and political situation in areas without an iTSCi presence and the risks and performance, relative to the OECD Due Diligence Guidance and the SEC rule, of stakeholders that are part of, or play a role in monitoring, the conflict minerals supply chain. A comparison of production data from the conflict-free sourcing initiatives in the context of each country’s or the region’s total production or exports of tin, tantalum, tungsten, or gold is not feasible, because no reliable, comprehensive data on the production or export of conflict minerals for the countries or region are available. However, some quantitative government data on the production and exports of conflict minerals from the DRC and adjoining countries are available (see app. V). For example, the DRC government has published data on production and exports of all four conflict minerals and the Rwandan government has published export and value data for tin, tantalum, and tungsten. In addition, the International Trade Centre, a joint agency of the World Trade Organization and the UN, collects export data from governments that provide some context for the amount of conflict minerals declared as exported from the region, although these data do not identify minerals from conflict-free mines (see app. VI). Some stakeholders indicated that the ICGLR may, at some point, be able to provide production and export data of conflict minerals from its member states, mostly Dodd-Frank-affected countries. This information could increase transparency at the individual country level. ICGLR requires its member states to implement a chain-of-custody tracking system for conflict minerals and to transmit data on mineral flows (i.e., quantities and destinations) at regular intervals to be incorporated in the ICGLR Regional Mineral Tracking Database. According to an ICGLR official, this database is populated with three types of information: (1) a historical record of mine sites in each country, (2) types and quantities of minerals produced at each mine site, and (3) mineral flows from the mine sites. According to ICGLR documentation, the data are used to track, analyze, and reconcile regional mineral flows and will become publicly available to ensure ICGLR’s credibility. In November 2013, the ICGLR official stated that four of the 12 member countries had submitted information for the regional database: Rwanda, Uganda, the DRC, and, to a lesser extent, Burundi. Since we reported in July 2013, no new population-based surveys related to sexual violence in the DRC, Uganda, Rwanda, or Burundi have been published. However, population-based surveys are underway, or being planned, in three of those countries—the DRC, Burundi, and Rwanda. In addition, some new case file data on sexual violence are available for all four countries. However, as we reported in 2011, case file data on sexual violence are not suitable for estimating a rate of sexual violence. Although no new surveys related to sexual violence in the DRC, Uganda, Rwanda, or Burundi have been published since July 2013, population- based surveys in the DRC, Burundi, and Rwanda are underway or planned by ICF International. According to ICF International, data collection for a Demographic and Health Survey (DHS), which is a type of population-based survey, for the DRC is complete, but data resulting from the survey are not expected until September 2014 or later. ICF International also said that fieldwork for a DHS in Rwanda is likely to start in September or October 2014. 
ICF International indicated that a DHS is planned to start in Burundi in 2014; however, data collection may be delayed by funding gaps and, as a result, the survey may not take place until 2015. Figure 4 shows the anticipated timelines for the population-based surveys on sexual violence that are currently underway or planned in the DRC, Burundi, and Rwanda. It also shows the publication dates for eight population-based surveys that provided data on the rate of sexual violence in eastern DRC, Rwanda, and Uganda that have been published since we started reporting on sexual violence in the region in 2011. Since 2013, some U.S. and UN agencies as well as researchers and an NGO have provided additional case file data on instances of sexual violence in the DRC and adjoining countries. In February 2014, State submitted its annual country reports on human rights practices to Congress, which provided the following data pertaining to sexual violence in the DRC, Burundi, and Uganda. For example: In the DRC, the government reported 18,729 cases of sexual violence in 2012. In Burundi, 3,781 cases of gender-based violence were reported in 2010, according to a report compiled from family development centers throughout the country. In Uganda, 530 cases of rape were registered in 2012, and 301 of the alleged rapists were tried and convicted. In addition, some UN entities reported case file information. For example: MONUSCO reported that between October 1, 2013, and December 5, 2013, it recorded acts of sexual violence against at least 79 women and 28 girls in conflict-affected provinces in the DRC. MONUSCO reported in March 2014 that sexual violence crimes continued to be committed by armed groups and that such crimes were allegedly committed against at least four women, 35 girls, and one man by elements of illegal armed forces and members of the Congolese military and police in January 2014. The United Nations Joint Human Rights Office reported in April 2014 that between January 2010 and December 2013, it registered 3,635 victims of sexual violence throughout the DRC. The report indicated that, for the reporting period, while more than half of the total alleged acts of sexual violence were committed by illegal armed groups, members of the Congolese national military committed less than half of the other alleged acts. Moreover, the DRC government reported case file information. For example: A report published in June 2013 by the DRC Ministry of Gender, the Family, and Child, with support from the UN Population Fund, highlighted cases of sexual violence in seven provinces in the country: Bandundu, Bas Congo, Katanga, Kinshasa, North Kivu, Orientale, and South Kivu. Data in the report indicate that 10,322 incidents of sexual and gender-based violence were reported for the seven provinces in 2011 and that the number increased to 15,654 incidents in 2012. While the data also indicate that most of these assaults were committed by people dressed in civilian clothes—85 percent in 2011 and 78 percent in 2012—incidents involving armed groups between 2011 and 2012 increased in South Kivu from 36 to 76 percent and in North Kivu from 32 to 61 percent. An NGO also published case file information. For example: Médecins sans Frontières reported in March 2014 that in 2012 it provided medical care to a total of 4,037 women, men, and children after incidents of sexual violence in different project locations in the DRC. 
It reported treating 95 of those individuals in one camp for internally displaced people in North Kivu during a 5-week period in late 2012 and early 2013. Several factors make case file data unsuitable for estimating rates of sexual violence. First, because case file data are not aggregated across various sources, and because the extent to which various reports overlap is unclear, it is difficult to obtain complete data, or a sense of magnitude, from case files. Second, in case file data as well as surveys, time frames, locales, and definitions of sexual violence may be inconsistent across data collection operations. Third, case file data are not based on a random sample and the results of analyzing these data are not generalizable. The long-running humanitarian crisis in eastern DRC, one of the most volatile areas in Africa, continues to be a concern for the U.S. government and the international community. As we have previously reported, section 1502 of the Dodd-Frank Wall Street Reform and Consumer Protection Act, enacted in 2010, is part of the U.S. effort to address the perpetration of sexual violence and mass killings in the DRC by armed groups who profit from the exploitation and trading of conflict minerals. We have compiled stakeholder data on the production and trade of conflict minerals that demonstrate, to some extent, the degree of transparency related to the conflict minerals trade. The actions undertaken by U.S. agencies in response to the Act could facilitate SEC-reporting companies’ compliance with the SEC rule, promulgated pursuant to the Act, as they conduct due diligence and prepare the required annual reports disclosing the use and the origin of conflict minerals in their products. However, because Commerce has not met the Act’s requirement that it compile by January 2013 a list of smelters and refiners—considered the “choke point” of the conflict minerals supply chain—these companies lack a source of critical information about the conflict minerals supply chain. Some stakeholders indicated that a comprehensive list of conflict minerals smelters and refiners could enable companies that are subject to the SEC rule to maintain transparency of the supply chain, and also provide companies the information they need for their SEC-required conflict minerals disclosure reports. Commerce cited several challenges that have hindered its efforts to provide the required list to Congress. However, having an action plan with associated timeframes could better position Commerce to report on the status of its efforts to produce a final list and provide it to Congress and to hold its personnel accountable for completing related activities. To give Congress a sense of Commerce’s efforts to produce a listing of all known conflict minerals processing facilities worldwide, as required by section 1502 of the Dodd-Frank Wall Street Reform and Consumer Protection Act, we recommend that the Secretary of Commerce provide to Congress a plan that outlines the steps, with associated timeframes, to develop and report the required information about smelters and refiners of conflict minerals worldwide. We provided a draft of this report to SEC, State, USAID, and Commerce for their review. Commerce provided written comments, which we have reproduced in appendix VII. SEC and State provided technical comments, which we incorporated as appropriate. 
We also provided relevant portions of the draft report to some industry associations and other stakeholders of conflict minerals initiatives from whom we had obtained information during our review; some of these stakeholders provided technical comments that we incorporated as appropriate. In its written comments, Commerce noted in response to our recommendation that it had provided and briefed us on a detailed outreach action plan that set forth how it intended to assemble the list of conflict minerals processing facilities required by the Act. However, as we have noted in this report, the document that Commerce provided was an outreach plan, consisting of a list of stakeholders that Commerce intended to contact to obtain information on conflict minerals smelters and refiners. The plan did not indicate a timeframe for completing and submitting to Congress the required listing of conflict minerals processing facilities worldwide. Commerce concurred with our recommendation and noted that it will submit a listing of all known conflict minerals processing facilities worldwide to Congress by September 1, 2014. We are sending copies of this report to appropriate congressional committees. The report is also available at no charge on the GAO website at http://www.gao.gov/. If you or your staffs have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. To determine the extent, if any, to which relevant U.S. agencies have engaged in activities related to responsible sourcing of conflict minerals, we interviewed officials who are cognizant of conflict minerals issues from the Departments of Commerce and State (State), the United States Agency for International Development (USAID), and the Securities and Exchange Commission (SEC). We reviewed section 1502 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Pub. L. No. 111-203) to identify the requirements for Commerce, State, SEC, and USAID related to its implementation. We also reviewed and analyzed reports and other documents from the agencies, such as the U.S. Strategy to Address the Linkages Between Human Rights Abuses, Armed Groups, Mining of Conflict Minerals, and Commercial Products; the conflict minerals rule; and maps of mineral-rich zones and areas under control of armed groups in the DRC. In addition, we reviewed and analyzed press releases, statements, plans, and guidance pertaining to conflict minerals and responsible sourcing that were issued by U.S. agencies. We also reviewed notes and agendas of responsible sourcing forums and other meetings attended by U.S. officials. To analyze what is known about the status of, and any information provided by, initiatives focused on responsible sourcing of conflict minerals from the DRC and adjoining countries, we interviewed officials and reviewed and analyzed documents from State, USAID, and the United Nations Group of Experts on the Democratic Republic of the Congo (UNGoE); interviewed representatives and reviewed and analyzed guidance documents, reports, and presentations from foreign governments, industry associations, multilateral organizations, companies, and nongovernmental organizations (NGOs). 
We selected these stakeholders based on their expertise on responsible sourcing issues, because they represented a range of perspectives on conflict minerals, and because we had established contacts with these entities during our last review. In addition, some of the stakeholders we talked to have been working on the ground in the DRC. The stakeholders we spoke with constitute a nongeneralizable sample, and the information we gathered from them cannot be used to infer views of other stakeholders cognizant of conflict minerals issues. To determine which initiatives had expanded and which were new initiatives in the region, we interviewed U.S. agency officials and relevant stakeholders and reviewed documentation from initiatives in the region. This report covers initiatives on which we have previously reported and new initiatives since our 2013 report, as described by stakeholders we interviewed. However, it is possible that the agency officials and stakeholders with whom we spoke may be unaware of other stakeholders and/or responsible sourcing initiatives active in the DRC and region. To determine the starting dates of the initiatives, we interviewed stakeholders and reviewed the websites of the various initiatives. To demonstrate what information stakeholders have reported regarding responsible sourcing initiatives, we reviewed and analyzed information published on the websites associated with the various responsible sourcing initiatives. When reporting on information provided by the initiatives and stakeholders, we are referring to information such as reports published, amount of minerals mined per region/country, amount of minerals exported, and value of minerals produced and exported, including data covering 2012 through 2014. The information gathered cannot be generalized and cannot be used to infer views of other stakeholders cognizant of conflict minerals issues. To demonstrate other sources of publicly available data on conflict minerals, we collected and analyzed stakeholder and government data covering 2003 through 2014. Because the data were not used to support findings, conclusions, or recommendations, we did not assess their reliability. To demonstrate the output of one of the in-region initiatives, we collected production and export data for the ITRI Tin Supply Chain Initiative (iTSCi). We reviewed the production reports for three provinces in the DRC and for Rwanda published on iTSCi’s website, abbreviated the data to include those most relevant to this study, and converted the data to tons and thousands of U.S. dollars. A limitation of the data is that the disaggregated production data—by mineral and mine—are proprietary to iTSCi members, so it is not possible to quantify the total amount of any of the minerals separately. We also collected conflict mineral production and export data from the websites of the governments of the DRC and Rwanda and from the International Trade Centre. We are presenting these data in the appendixes of the report because these are the only publicly available data we found for sourcing conflict minerals from the region. There is no distinction in these data between minerals that are conflict-free and those that have supported armed groups. Moreover, none of these data can be generalized or be used to infer the total production or export of conflict minerals from the DRC and the adjoining countries. 
In response to a mandate in the Dodd-Frank Wall Street Reform and Consumer Protection Act that GAO submit an annual report that assesses the rate of sexual violence in war-torn areas of the DRC and adjoining countries, we identified and assessed any additional published information available on sexual violence in war-torn eastern DRC, as well as three adjoining countries—Rwanda, Uganda, and Burundi—since our 2013 report on sexual violence in these areas. During the course of our review, we interviewed officials from State and USAID and interviewed NGO representatives and researchers to discuss the collection of sexual violence-related data—including population-based surveys and case file data—in the DRC and adjoining countries. Specifically, we followed up with researchers and representatives from those groups we interviewed for our prior review on sexual violence rates in eastern DRC and adjoining countries, including a representative from the Human Rights Center at the University of California, Berkeley, School of Law, and other officials. We also traveled to New York City to meet with officials from the United Nations Population Fund, United Nations High Commissioner for Refugees, and the United Nations Special Representative of the Secretary-General on Sexual Violence in Conflict. We also conducted Internet literature searches to identify new academic articles containing any additional information on sexual violence since our 2013 report. We conducted this performance audit from September 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Figure 5 depicts the most recent map published by the Department of State. The map focuses on the exploitation of tin, tantalum, tungsten, and gold in North and South Kivu and parts of Orientale, Maniema, and Katanga provinces. When the SEC adopted the conflict minerals rule in August 2012, it published a flowchart summary of the final rule to guide SEC-reporting companies affected by the rule through the disclosure process (see fig. 6). In general, the process reflects that an SEC-reporting company needs to (1) determine whether its manufactured products contain conflict minerals, (2) determine whether conflict minerals are necessary to the functionality or production of the product and whether they originated in the DRC or an adjoining country, and (3) possibly conduct due diligence and potentially provide a Conflict Minerals Report. The ITRI Tin Supply Chain Initiative (iTSCi) publishes qualitative and quantitative information on its projects in the DRC and Rwanda on its website. The information includes production data for minerals that have been mined and traded employing the iTSCi traceability system, audits of iTSCi member companies, and assessments of the political and security situation in various sites in the DRC and Rwanda. The mineral production and export data include minerals mined under the auspices of other stakeholder-led initiatives in the region. Almost all of the initiatives employ the iTSCi traceability system to track the mining and trading of minerals along the supply chain. 
One limitation of the data is that the production data—disaggregated by mineral and mine—are proprietary to iTSCi members. Therefore, it is not possible to quantify the total quantity or value of any of the minerals separately. Tables 2-7 provide iTSCi production in metric tons and mineral sales in thousands of U.S. dollars for tin, tungsten, and tantalum coming from several provinces in the DRC, including Maniema, South Kivu, and Katanga. Tables 8 and 9 provide iTSCi production in tons and mineral sales in thousands of U.S. dollars for tin, tungsten, and tantalum in Rwanda. The DRC Ministry of Mines published data on the volume of production and exports for tin, tantalum, tungsten, and gold for 2003-2012 (see tables 10-13). The Rwanda Natural Resources Authority published data on the volume and value of tin, tantalum, and tungsten exported from Rwanda from January to August of 2013 (see tables 14 and 15). The International Trade Centre (ITC), a joint agency of the World Trade Organization and the United Nations (UN), compiles a Trade Map with data including the global exports in tons and export value of tin, tantalum, tungsten, and gold from the DRC and adjoining countries, as available for fiscal years 2009-2013 (see tables 16-31 for ITC data as of June 2014). The ITC calculated these data using the UN Commodity Trade Statistics Database, which compiles trade data from UN member countries. These data do not provide a comprehensive depiction of the flow of conflict minerals exported from the DRC and adjoining countries; rather, they are an estimate based on import data from reporting partner countries. Furthermore, these data may include imports of conflict minerals that have financed armed groups. In many instances, there are no data listed for a particular mineral or year. There were no data for import volume or value of conflict minerals in the ITC database for the Republic of the Congo (Congo-Brazzaville) or South Sudan. In addition to the individual named above, Godwin Agbara (Assistant Director), Russ Burnett, Etana Finkler, Justin Fisher, Julia Jebo Grant, Ernie Jackson, Jill Lacey, Reid Lowe, Andrea Riba Miller, and John O’Trakoun made key contributions to this report.
Armed groups in eastern DRC continue to commit severe human rights abuses and profit from the exploitation of minerals, according to reports from the United Nations. Congress included a provision in the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act to address the trade in “conflict minerals”—tin, tantalum, tungsten, and gold. Section 1502 of the Act directed several U.S. agencies to report or focus on issues related to conflict minerals. This report examines, among other things, (1) the extent to which relevant U.S. agencies have undertaken activities related to responsible sourcing of conflict minerals and (2) what is known about the status of, and information provided by, stakeholder initiatives focused on responsible sourcing of conflict minerals from the DRC and adjoining countries. GAO reviewed and analyzed documents and data covering 2003 through 2014. GAO interviewed representatives from State, USAID, SEC, Commerce, nongovernmental organizations, industry, and international organizations who are cognizant of conflict minerals issues. Since the Dodd-Frank Wall Street Reform and Consumer Protection Act (the Act) was passed in 2010, relevant U.S. agencies have undertaken various activities related to responsible sourcing of conflict minerals from the Democratic Republic of the Congo (DRC) and adjoining countries. In response to the Act, the Department of State (State) and the U.S. Agency for International Development (USAID) developed a strategy in 2011 to address the linkages among human rights abuses, armed groups, and the mining of conflict minerals and are implementing various strategy objectives. The Securities and Exchange Commission (SEC) issued a rule in 2012 requiring certain companies to disclose the source and chain of custody of necessary conflict minerals in their products. However, the Department of Commerce (Commerce) has not yet compiled a list of all conflict minerals processing facilities—smelters and refiners—known worldwide, which the Act required by January 2013. Commerce cited difficulties with, for example, tracking conflict minerals operations but told GAO that it had completed outreach efforts with the majority of stakeholders. Commerce did not have a plan of action, with associated time frames, for developing and reporting on the list of conflict minerals processing facilities worldwide. Standard practices in program and project management include, among other things, developing a plan to execute specific projects needed to obtain defined results within a specific time frame. An action plan with timeframes could better position Commerce to report on the status of its efforts to produce a final list and provide it to Congress and to hold its personnel accountable for completing related activities. Over the past several years, a number of stakeholders—foreign governments, multilateral organizations, and industry associations, among others—have expanded, or made plans to expand, initiatives focused on responsible sourcing of conflict minerals in the DRC and adjoining countries. These stakeholder initiatives, such as in-region tracing of conflict minerals and development of guidance documents and audit protocols, have grown to include new mine sites, countries, and smelters. For example, the Conflict-Free Smelter Program, an industry-led effort, has expanded from 26 smelters certified as conflict-free in 2013 to 85 smelters as of April 25, 2014 (see table). 
New stakeholder initiatives are also underway or planned in the region, including the first responsible sourcing initiative in Congo-Brazzaville. Some initiatives have yielded publicly available information, including data on production of conflict-free minerals and export data. For example, one stakeholder has reported production data for tin, tungsten, and tantalum from three provinces in the DRC and in Rwanda. Source: Conflict-Free Sourcing Initiative data; GAO analysis. GAO recommends that the Secretary of Commerce provide Congress a plan that outlines the steps, with associated timeframes, to develop and report the required information about smelters and refiners of conflict minerals worldwide. Commerce concurred with GAO's recommendation and noted that it will submit a listing of all known conflict minerals processing facilities worldwide to Congress by September 1, 2014.
Although training for employed workers is largely the responsibility of employers and individuals, publicly funded training seeks to fill potential gaps in workers’ skills. In recent years, the federal government’s role in training employed workers has changed. In 1998, WIA replaced the Job Training Partnership Act after 16 years and, in doing so, made significant changes to the nation’s workforce development approach. Before implementation of WIA, federal employment and training funds were primarily focused on helping the unemployed find jobs; the WIA legislation allowed state and local entities to use federal funds for training employed workers. TANF block grants also allowed states more flexibility in serving low-wage workers and, like WIA funds, federal funding authorized under TANF can now be used for training employed workers, including low-wage workers. WIA funds provide services to adults, youth, and dislocated workers and are allocated to states according to a formula. States must allocate at least 85 percent of adult and youth funds to local workforce areas and at least 60 percent of dislocated worker funds to local workforce areas. The WIA funds used to train employed workers come from those appropriated to provide services to all adults and dislocated workers, funded at about $2.5 billion for program year 2001. WIA also permits states to set aside up to 15 percent of the WIA funds allocated to their states for adults, youth, and dislocated workers to support a variety of statewide workforce investment activities that can include implementing innovative employed worker programs. These funds can also be spent to provide assistance in establishing and operating one-stop centers, developing or operating state or local management information systems, and disseminating lists of organizations that can provide training. In a previous GAO report, we reported that several states used these state set-aside funds specifically for implementing employed worker training. WIA also required that all states and localities offer most employment and training services to the public through the one-stop system—about 17 programs funded through four federal agencies provide services through this system. For this system, WIA created three sequential levels of service—core, intensive, and training. The initial core services, such as job search assistance and preliminary employment counseling and assessment, are available to all adults, and WIA imposes no income eligibility requirements for anyone receiving these core services. Intensive services, such as case management and assistance in developing an individual employment plan, and training require enrollment in WIA and generally are provided to persons judged to need more assistance. In order to move from the core level to the intensive level, an individual must be unable to obtain or retain a job that pays enough to allow the person to be self-sufficient, a level that is determined by either state or local workforce boards. In addition, to move from the intensive level to the training level, the individual must be unable to obtain other grant assistance, such as Department of Education grants, for such training services. Under WIA, states are encouraged to involve other agencies besides workforce development—including the agencies responsible for economic development and the Department of Health and Human Services’ TANF program—in the planning and delivery of services in the one-stop center system. 
WIA performance measures are designed to indicate how well program participants are being served by holding states and local areas accountable for such outcomes as job placement, employment retention, and earnings change. WIA requires the Department of Labor and states to negotiate expected performance levels for each measure. States, in turn, must negotiate performance levels with each local area. The law requires that these negotiations take into account such factors as differences in economic conditions, participant characteristics, and services provided. WIA holds states accountable for achieving their performance levels by tying those levels to financial sanctions and incentive funding. States meeting or exceeding their measures may be eligible to receive incentive grants that generally range from $750,000 to $3 million. States failing to meet their expected performance measures may suffer financial sanctions. If a state fails to meet its performance levels for 1 year, Labor provides technical assistance, if requested. If a state fails to meet its performance levels for 2 consecutive years, it may be subject to up to a 5 percent reduction in its annual WIA grant. In fiscal year 2000—the latest for which data are available—states reported spending $121.6 million in federal TANF funds specifically for education and training. Prior to WIA, welfare reform legislation created the TANF block grant, which provided flexibility to states to focus on helping needy adults with children find and retain employment. The TANF block grant is a fixed amount block grant of approximately $16.7 billion annually. Although the TANF program was not required to be part of WIA’s one-stop system, states and localities have the option to include TANF programs. As we have previously reported, many are working to bring together their TANF and WIA services. The TANF block grants allow states the flexibility to decide how to use their funds—for example, states may decide eligibility requirements for recipients, how to allocate funds to a variety of services, and what types of assistance to provide. Work-related activities that can be funded under TANF encompass a broad range of activities including subsidized work, community service programs, work readiness and job search efforts, as well as education and training activities such as on-the-job training, vocational education, and job skills training related to employment. TANF funds available to states can be used for both pre- and postemployment services. Because of the increased emphasis on work resulting from welfare reform and time limits for receiving cash assistance, state offices responsible for TANF funds may focus largely on helping their clients address and solve problems that interfere with employment, such as finding reliable transportation and affordable child care, especially for those in low-paying jobs. In recent years, several federal demonstration or competitive grants were available for training employed workers. For example, the Department of Labor’s Welfare-to-Work state and competitive grants were authorized by the Congress in 1997 to focus on moving the hardest-to-employ welfare recipients and noncustodial parents of children on welfare to work and economic self-sufficiency. Overall, welfare-to-work program services were intended to help individuals get and keep unsubsidized employment. Allowable activities included on-the-job training, postemployment services financed through vouchers or contracts, and job retention and support services. 
In addition, shortly after WIA was enacted, Labor gave all states an opportunity to apply for $50,000 planning grants for employed worker training. States were instructed to develop policies and program infrastructures for training employed workers and to indicate their available resources, anticipated needs, and plans for measuring success. The Secretary of Labor also awarded larger, 2-year competitive demonstration grants, operating from July 1, 1999, to June 20, 2001, for training employed workers. In addition, HHS is supporting the Employment Retention and Advancement (ERA) study of programs that promote stable employment and career progression for welfare recipients and low-income workers. In 1998, for the planning phase of this project, HHS awarded 13 planning grants to states to develop innovative strategies. HHS has contracted with the Manpower Demonstration Research Corporation to evaluate 15 ERA projects in eight states, comparing the outcomes of those who received services with a control group that did not. About the same time as the enactment of WIA, the Congress passed the American Competitiveness and Workforce Improvement Act of 1998, which authorized some funding for technical skills training grants as part of an effort to increase the skills of American workers. This legislation raised limits on the number of high-skilled workers entering the United States with temporary work visas, imposing a $500 fee on employers—later raised to $1,000—for each foreign worker for whom they applied. Most of the money collected is to be spent on training that improves the skills of U.S. workers. Labor awards the skill grants to local workforce investment boards, thereby linking the skill grant program with the workforce system. The workforce boards may use the funds to provide training to both employed and unemployed individuals. In a previous GAO report on these grants, we reported that, for grantees that collected participant employment data (39 of 43 grantees), approximately three-fourths of the skills training grant participants are employed workers upgrading their skills. In addition to being able to use WIA state set-aside funds for different activities, including training employed workers, states can authorize funds from other available sources, such as state general revenue funds or funds related to unemployment insurance trust funds. States can also fund such training in conjunction with other federal funding grants, such as the Department of Housing and Urban Development’s Community Development Block Grant. This grant can be used for economic development activities that expand job and business opportunities for lower-income persons and neighborhoods. These state training programs serve primarily to help businesses address a variety of issues including skill development, competitiveness, economic development, and technological changes. States can fund training for employed workers through various offices. Workforce development offices have historically focused on training for unemployed and economically disadvantaged individuals, while economic development offices have typically focused on helping employers foster economic growth for states. Economic development offices may also provide employment and training opportunities to local communities, generally by working with employers to meet skill shortages and long-term needs for qualified workers. 
States have more often subsidized training tailored for businesses through their economic development offices, according to reports published by the National Governors’ Association. Most of the local workforce boards reported that they provided assistance to train employed workers, including funding training, as did all 16 states that we contacted. Two-thirds of the workforce boards responding to our survey provided assistance to train employed workers in a variety of ways, and nearly 40 percent of the workforce boards specifically targeted funds for training these workers. Furthermore, a greater percentage of workforce boards reported funding employed worker training in program year 2001 than in program year 2000. The 16 states we contacted all funded training for employed workers and most of these states funded and coordinated this training from two or more offices. Few states and local workforce boards were able to provide information on the number of low-wage workers who participated in training because many did not categorize training participants by wage or employment status. Generally, local areas and states funded training for employed workers with various federal, state, local, or other resources, although WIA and other federal funds were the most common sources of funding for this training. Two-thirds of the local workforce boards reported performing tasks that facilitated the provision of employed worker training, such as partnering with employers to develop training proposals and providing individual services to employed workers. For example, one workforce board helped a local manufacturer obtain a state grant to retrain its employees through a project to upgrade skills. Another workforce board helped a local company by arranging English as a Second Language (ESL) classes for its employees through a community college. Other workforce boards helped employed workers establish individual training accounts with eligible training providers. However, some workforce boards responded that they did not specifically target training for employed workers because their overall funds were so limited that such training was not a priority. Several respondents explained that their clients were served based on need and that individuals with jobs were not a priority for services because of the sizable unemployed population served by the workforce boards. Nearly 40 percent of the local workforce boards responding to our survey specifically targeted funds for employed worker training. The number of boards that reported budgeting or spending funds on such training in program years 2000 or 2001 varied by state. (See fig. 1.) Most states had at least one workforce board that targeted funds for such training. Furthermore, a greater percentage of workforce boards reported funding such training in program year 2001 than in the previous year. Of all the workforce boards responding to our survey, 22 percent reported spending funds specifically for training employed workers in 2000 and 31 percent reported spending funds on training these workers in 2001. When they funded training for employed workers, local workforce boards reported doing so in a variety of ways. For example, in cooperation with the economic development office, one workforce board in West Virginia worked with local businesses to identify and fund training programs to meet their business needs. 
At a workforce board we visited in Texas, officials received a competitive state grant to fund employed worker training to meet critical statewide industry needs in health care, advanced technology, and teaching. Some local workforce boards that had not specifically targeted training for employed workers were planning to become involved in such training or had begun discussions about developing policies for this type of training. For example, a workforce official in California cited plans to use $95,000 from a federal grant to train employed workers in information technology. Another workforce board, in Minnesota, planned to open a training center for employed workers that would focus on business needs within the local community, such as health care, and provide training through a local community college. All of the 16 states we contacted funded training for employed workers. In most of the 16 states, training for employed workers was not limited to the efforts of a single state office, but was funded by two or more state offices with training responsibilities. In fact, in 8 states, all three offices we contacted funded training for employed workers. In addition to offices responsible for workforce development, economic development, and TANF funds used for education and training, state officials also identified education departments—including those of higher education—within their states as important funding sources for training employed workers. In New York, for example, training funds were spread across about 20 state agencies, according to one state official. When more than one office within a state funded training for employed workers, most state offices reported coordinating their training efforts both formally and informally. Formal coordination methods that state officials cited included workgroups and advisory boards (15 states), memoranda of understanding or mutual referral agreements between offices (12 states), or coordinated planning (12 states). For example, Indiana’s economic development office noted that it had formal linkages with the workforce office and that they collaborated on a lifelong learning project. Offices in 9 of the 16 states also cited other means of coordination, such as having common performance measures. For example, Oregon’s workforce development office reported that state agencies were held to a set of statewide performance measures. In addition to these formal methods of coordination, all states cited informal information sharing as a key means of coordination among offices within their state. For example, an economic development official in one state said he used his telephone speed dial to contact his workforce development colleague, and a workforce development official in another state told us she had frequent working lunches with the state official responsible for TANF funds used for education and training. In addition, in a few states, offices jointly administered training programs within their states. In New York, for example, workforce development and economic development offices comanaged a high-skill training grant program for new and employed workers using $34 million in state general revenue funds over 3 years. For this training program, begun in July 2001, both offices reviewed training proposals, and the workforce department created contracts and reimbursed companies for part of the training costs. 
Similarly, in Pennsylvania, five departments—Labor and Industry, Public Welfare, Community and Economic Development, Education, and Aging—jointly administered an industry-specific training grant initiative that primarily funded training for low-wage health care workers. This joint effort represented a new approach for Pennsylvania, because previously the economic development office was responsible for training that was tailored, or customized, to employers. Under this joint program, a state committee with representatives from each of the five departments reviewed grant proposals and each agency funded a portion of approved grants. Finally, several states had reorganized their workforce responsibilities and funding, either by consolidating workforce development and economic development responsibilities or combining responsibilities for WIA and TANF funds. For example, Montana and West Virginia transferred WIA responsibilities and funding from the workforce office to the economic development office. According to state officials, this approach was intended to better align and integrate workforce and economic development goals for the state. In Texas, the workforce commission—which was created in 1995 to consolidate 10 agencies and 28 programs—was responsible for WIA and TANF block grants, among others. In Florida, a public-private partnership, governed by the state's workforce board, became responsible in October 2000 for all workforce programs and funds in the state, including WIA, TANF, and Welfare-to-Work grant funds; this shift was intended to create a better link between workforce systems and businesses in the state. Few state officials or local workforce boards were able to report the number of low-wage workers who participated in training, for various reasons. For example, some officials told us they did not categorize training participants by wage. Other officials reported that, although they targeted low-wage workers for training, they did not categorize training participants by employment status. Although states we contacted could not always provide us with the number of low-wage workers participating in training, 13 of the 16 states reported that they funded training targeted to low-wage workers. Additionally, when WIA funds are limited, states and local areas must give priority for adult intensive and training services to recipients of public assistance and other low-income individuals. Local workforce boards reported that WIA and other federal funds were the most common sources of funds used to support employed worker training. Federal funding for these training efforts included WIA funding—both local and the state set-aside portion—TANF funds, and local Welfare-to-Work funds. (See fig. 2.) In addition, local boards described various other important funding sources, such as Labor's demonstration grants for training employed workers and the federal skills training grants intended to train workers in high-demand occupations. For those local workforce boards spending funds specifically for training employed workers, their allocation of local WIA funds most often paid for these training efforts, and more reported using local WIA funds in program year 2001 than in the previous year. However, while nearly all workforce boards responding to our survey were aware that WIA allowed funds to be used for training employed workers, some reported that there were too many priorities competing for the WIA funds.
Two local officials also noted that, in their states, the federal funds allocated to states under WIA—the state set-aside funds—were awarded competitively, which made it difficult to consistently serve employed workers because they were uncertain that they would receive these grants in the future. Local workforce boards also combined funding from several sources—including federal, state, local, and foundation support—to train employed workers. For example, one workforce board in Pennsylvania combined $50,000 in funds from the state WIA set-aside with about $1.8 million from the state's community and economic development department to fund such training. Although financial support from local entities or foundations was available to a lesser extent, some workforce boards were able to mix these with funds from other sources. For example, in California, one workforce board funded training for employed workers with a combination of foundation grants and fees for training services provided to employers in addition to TANF funds, Welfare-to-Work and other competitive grants from Labor, and state funds. States reported that WIA and other federal funds were the most common sources of funding used for training employed workers. (See fig. 3.) Twelve of the 16 states we contacted used three or more sources of funds for this purpose. Of the 16 states we contacted, 13 used their WIA state set-aside funds for training employed workers. For example, in Texas, nearly $11 million was awarded competitively to 10 local workforce boards, and the state projected that over 9,000 employed workers would receive training. Eleven states also used TANF funds to train employed workers. States also reported using state general revenue funds, funds related to Unemployment Insurance (UI) trust funds, such as penalty and interest funds or add-ons to UI taxes, and funds from other sources, such as community development block grants or state lottery funds. (See table in app. III.) In their training initiatives for employed workers, states and local workforce boards focused on training that addressed specific business needs and emphasized certain workplace skills. States and local workforce boards gave priority to economic sectors and occupations in demand, considered economic factors when awarding grants, and funded training that was tailored or customized to specific employers. States and local workforce boards focused most often on training provided by community or technical colleges that emphasized occupational skills and basic skills. Most of the 16 states we contacted focused on certain economic sectors or occupations in which there was a demand for skilled workers. Twelve states had at least one office, usually the economic development office, that targeted the manufacturing sector for training initiatives. States also targeted the health care and social assistance sector (which includes hospitals, residential care facilities, and services such as community food services) and the information sector (which includes data processing, publishing, broadcasting, and telecommunications). New York took a sector-based approach to training by funding grants to enable employees to obtain national industry-recognized certifications or credentials, such as those offered through the computer software or plastics industries. Other training programs focused on occupations in demand.
For example, in Louisiana, two state offices funded training that gave preference to occupations with a shortage of skilled workers, such as computer scientists, systems analysts, locomotive engineers, financial analysts, home health aides, and medical assistants. Of the 148 local workforce boards that specifically funded training for employed workers in 2001, the majority targeted particular economic sectors for training these workers. As with the states, most often these sectors were health care or manufacturing. (See fig. 4.) For example, workforce boards we visited in Florida, Minnesota, Oregon, and Texas became involved in funding or obtaining funding for local initiatives to train health care workers that hospitals needed, such as radiographers and certified nursing assistants. Some states considered local economic conditions, such as unemployment rates, in their grant award criteria in addition to, or instead of, giving priority to certain economic sectors and occupations. For example, California's Employment Training Panel must set aside at least $15 million each year for areas of high unemployment. Similarly, in Illinois and Indiana, the state economic development offices considered county unemployment or community needs in awarding training funds. Florida's workforce training grants gave priority to distressed rural areas and urban enterprise zones in addition to targeting economic sectors. In addition, most state economic development offices (13 of 16) and more than half of the state workforce development offices (9 of 16) we contacted funded training that was tailored or customized to specific employers' workforce needs. For economic development offices, such customized training was not new: these offices have typically funded training for specific companies as a means of encouraging economic growth within their states, and in some cases have done so for a long time. For example, California has funded training tailored to specific employers' needs since 1983 through its Employment Training Panel. This program spent $86.4 million in program year 2000 to train about 70,000 workers; nearly all of them were employed workers, according to state officials. However, for many state workforce development offices, funding customized training was a shift in their approach to workforce training, one that could strengthen the links between employees and jobs. With customized training, local employers or industry associations typically proposed the type of training needed when they applied for funding and often selected the training providers. Examples of customized training initiatives sponsored by workforce development offices include the following:
In Indiana, the state workforce office has sponsored a high-skills, high-wage training initiative since 1998 to meet employers' specific needs for skilled workers in information technology, manufacturing, and health. This effort is part of a statewide initiative for lifelong learning for the existing workforce.
In Hawaii, the workforce office established a grant program for employer consortiums to develop new training that did not previously exist in the state.
In Louisiana, the workforce office has funded a training program customized for employers who had been in business for at least 3 years. It required that the company provide evidence of its long-term commitment to employee training.
In the states we contacted, many customized training programs required that grant applicants—usually employers—create partnerships with other industry or educational organizations. For example, Oregon's workforce development office required local businesses to work with educational partners in developing grant proposals. One local workforce board we visited in Oregon collaborated with a large teaching hospital and its union to obtain funding for training hospital employees, and local one-stop staff partnered with nursery consortia and community colleges to obtain funds to upgrade the skills of agricultural workers. Similarly, in its high-skill training grant program, New York's workforce development office required employers to form partnerships with labor organizations, a consortium of employers, or local workforce investment boards. In at least 11 of the 16 states we contacted, the programs also required employers to provide matching funds for training employed workers, which can help offset costs to the state for training as well as indicate the strength of the employers' commitment to training. States that had requirements for matching funds—often a one-for-one match—included Indiana, Minnesota, Montana, New Hampshire, New York, Oregon, Pennsylvania, Tennessee, Texas, Utah, and West Virginia. Utah's economic development office required a lower match from rural employers, and Indiana's match varied case-by-case. Sometimes states required other kinds of corporate investments as a condition for obtaining funds for training employees. For example, in Tennessee, companies participating in a job skills training program for high-technology jobs were required to make a substantial investment in new technology. In addition, several states included certain requirements in their eligibility criteria to address potential concerns about whether public funds were being used to fund training that businesses might otherwise have funded themselves. For example, in Louisiana and West Virginia, the workforce office required employers to provide evidence satisfactory to the office that funds shall be used to supplement and not supplant existing training efforts. Although states reported funding many types of training for employed workers, occupational skills training and basic skills training were the most prevalent. Fifteen of the 16 states we contacted funded occupational skills training—such as learning new computer applications—for employed workers. In Tennessee, for example, the economic development office spent more than $27 million of state funds in program years 2000 and 2001 on a job skills training initiative for workers in high-skill, high-technology jobs, according to a state official. Nearly all states also reported funding basic skills training, including basic math skills and ESL, for employed workers with low levels of education. For example, Texas funded ESL training in workplace literacy primarily for Vietnamese- and Spanish-speaking workers participating in health care training. Local workforce boards also reported funding many types of training; however, occupational skills training was most frequently provided to employed workers. (See fig. 5.) For example, 90 percent of the local workforce boards that spent funds to train employed workers funded occupational training in program year 2001 to improve and upgrade workers' skills. In that year, 47 percent of the local workforce boards also funded basic skills training for employed workers.
The next most prevalent type of training funded for employed workers was in soft skills, such as being on time for work, and 34 percent of local workforce boards funded this type of training in program year 2001. Community or technical colleges were often used to train employed workers, according to both state and local officials we contacted. For example, 78 percent of local workforce boards that spent funds to train employed workers reported that community or technical colleges were training providers in program year 2001. (See fig. 6.) State and local workforce officials also cited using private training instructors and employer-provided trainers, such as in-house trainers. In targeting training to low-wage workers, state and local officials addressed several challenges that hindered individuals' and employers' participation in training. Workforce officials developed ways to address the personal challenges low-wage workers faced that made participating in training difficult. In addition, workforce officials we visited identified ways to address employer reluctance to support training efforts. Despite attempts to address these issues, however, challenges to implementing successful training still exist. For example, state and local officials reported that the WIA performance measure that tracks adult earnings gain, as well as certain funding requirements that accompany some federally funded training programs, may limit training opportunities for some low-wage workers. State and local officials developed a number of approaches to overcome some of the challenges faced by low-wage workers. They noted that many low-wage workers had a range of personal challenges—such as limited English and literacy skills, childcare and transportation needs, scheduling conflicts and financial constraints, and limited work maturity skills—that made participating in training difficult. However, many officials also reported several approaches to training low-wage workers. Offering workplace ESL and literacy programs was one approach officials used to address limited English and literacy skills among low-wage workers. For example, one workforce board in Minnesota used a computer software program to develop literacy among immigrant populations. A state workforce official in Oregon reported customizing ESL to teach language skills needed on the job. In addition, some of the employers we visited provided training to their employees in their native language or taught them vocational ESL. Officials we visited in Texas offered a 5-week vocational ESL course before the start of the certified nursing assistant training program primarily to help prepare Vietnamese- and Spanish-speaking students who were not fluent in English. Many low-wage workers faced challenges securing reliable transportation and childcare, particularly in rural areas and during evening hours. Several state and local officials noted that assisting low-wage workers with transportation and childcare enabled them to participate in training. One program in Florida provided childcare and transportation to TANF-eligible clients. In Minnesota, local officials told us that they provided transportation for program participants. Participants used the agency's shuttle bus free of charge until they received their second paycheck from their employer. After the second paycheck, individuals paid a fee for the shuttle and were encouraged and supported in finding their own transportation.
Providing on-site, paid, or flexible training was a method used to address scheduling conflicts and financial constraints experienced by low-wage workers. Many workforce boards that identified approaches on our survey cited various methods of providing training that helped address some of the challenges low-wage workers faced. These methods included offering training at one-stops or through distance learning and teleconferencing courses. For example, an employer in California paid employees for 40 hours of work, but allowed 20 hours of on-site training during that time. In addition, some hospitals permitted flexible schedules for employees who sought additional training for career advancement. Offering additional assistance and incentives was an approach officials identified for improving low-wage workers' limited work maturity skills, such as punctuality and appropriate dress. Officials we visited in Texas reported that they helped low-wage workers develop better skills for workplace behavior. For example, they helped clients understand the need to call their employer if something unexpected happened, like a flat tire, that prevented them from coming to work. In addition, a workforce board in West Virginia reported that it provided a $50 incentive to employees for perfect attendance during the first 6 weeks of work. State and local officials developed a number of ways to address the concerns of employers who were reluctant to participate in low-wage worker training. According to state and local officials, employers' reservations about participation stemmed from different concerns, including the fear that better-trained employees would find jobs elsewhere. Officials reported that other employers were hesitant to participate in low-wage worker training because of paperwork requirements or the time and expertise they believed were involved in applying for state training grants. Despite these concerns, state and local officials identified approaches to encourage employer participation. According to officials we contacted, some employers said that if their employees participated in training, the employees would seek jobs elsewhere. Officials addressed this perception by forming partnerships with employers and educators and offering training that corresponded to specific career paths within a company. For example, a workforce board we visited in Oregon partnered with a local nursery, a landscaping business, and a community college to train entry-level workers in agriculture and landscaping to move up into higher-skilled and better-paying positions at the same company. These career paths also addressed the concern, expressed by some employers, that too few employees were qualified to fill positions beyond the entry level. Officials found other ways to alleviate employers' fears. Officials in Oregon encouraged trainees at a hospital to stay with their current employer by requiring them to sign a statement of intent regarding training. The hospital trained employees after they signed an agreement that asked for a commitment that they remain with the employer for a specific amount of time in return for training. State and local officials noted that some employers were also reluctant to have their employees participate in government-funded training programs because they believed that certain data collection and reporting requirements were cumbersome.
For example, state workforce officials we contacted reported that some employers found it difficult to get employees to fill out a one-page form regarding income, as required to determine eligibility for certain funds, such as TANF. In an effort to ease the funding paperwork burden, state officials we contacted in West Virginia were working toward reducing the application paperwork required for employers to obtain worker-training dollars. Workforce officials also reported that some employers were hesitant to apply for federally funded training grants because they believed that they did not have the time or the expertise to do so. To address this, workforce officials we visited in Oregon worked with union representatives and training providers to co-write training grant proposals. The workforce officials we visited told us that the involvement of the union was a key factor in the training initiative's success. Prior to this cooperative effort, the employer had not been responsive to workers' needs and the involvement of the union helped to bridge the gap between worker and employer needs. State and federal funding requirements—such as WIA performance measures, time limits, and participant eligibility—may limit training opportunities for some low-wage workers. Under WIA, performance measures hold states accountable for the effectiveness of the training program. If states fail to meet their expected performance levels, they may suffer financial sanctions. State funding regulations for some training initiatives, such as TANF-funded projects, required the funds to be used within a specific time period. Because local areas must wait for states to allocate and disburse the funding, local officials sometimes had less than 1 year to use the funding. Finally, individuals are sometimes eligible for services based on their income, especially for TANF or WIA local funds. Depending on the level at which local areas set eligibility requirements, some low-wage workers may earn salaries that are still too high to be eligible for services provided by these training funds. WIA established performance measures to provide greater accountability and to demonstrate program effectiveness. These performance measures gauge program results in areas such as job placement, employment retention, and earnings change. (See table 1.) Labor holds states accountable for meeting specific performance outcomes. If states fail to meet their expected performance levels, they may suffer financial sanctions; if states meet or exceed their levels, they may be eligible to receive additional funds. A prior GAO report noted that the WIA performance levels are of particular concern to state and local officials. If a state fails to meet its performance levels for 1 year, Labor provides technical assistance, if requested. If a state fails to meet its performance levels for 2 consecutive years, it may be subject to up to a 5 percent reduction in its annual WIA formula grant. Conversely, if a state exceeds its performance levels, it may be eligible for incentive funds. State and local officials reported that the WIA performance measure that tracks the change in adult earnings after 6 months could limit training opportunities for employed workers, including low-wage workers.
Some workforce officials were reluctant to register employed workers for training because the wage gain from unemployment to employment tended to be greater than the wage gain for employed workers receiving a wage increase or promotion as a result of skills upgrade training. For example, a state official from Indiana noted that upgrading from a certified nursing assistant to the next tier of the nursing field might increase a worker's earnings by only 25 cents per hour. Yet, for the purposes of performance measures, workforce boards may need to indicate a change in earnings larger than this in order to avoid penalties. For example, one workforce official from Michigan reported that the performance measure requires the region to show an increase that equates to a $3.00-per-hour raise. In a previous GAO study, states reported that the need to meet these performance measures may lead local staff to focus WIA-funded services on unemployed job seekers who are most likely to succeed in their job search or who are most able to make wage gains, rather than on employed workers. Time limits for some funding sources were a challenge for some officials trying to implement training programs, according to some state and local workforce officials. In Florida, for example, officials we visited reported that they had a state-imposed 1-year time limit for using TANF funds for education and training, which made it difficult for officials to plan a training initiative, recruit eligible participants, and successfully implement the training program. Similarly, state and local officials we contacted in Oregon expressed frustration with the amount of effort required to ensure the continuation of funding for the length of their training initiative. They noted that funding for a 1-year training grant for certified medical assistants and radiographers expired 7 months before the training program ended. The local workforce board identified an approach to fund the training for the remainder of the program by using other funding sources. Although this workforce board was able to leverage other funds, this solution is not always feasible. Finally, several officials reported that eligibility requirements for the WIA local funds were a challenge because they might exclude some low-wage workers from training opportunities. States or local areas set the income limit for certain employment and training activities by determining the wage level required for individuals to be able to support themselves. When funds are limited, states and local areas must give priority for adult intensive and training services to recipients of public assistance and other low-income individuals. Officials on several workforce boards said that these eligibility guidelines for their local areas, particularly the income limit, made it challenging to serve some low-wage workers. For example, local workforce board officials from California indicated that they would like more flexibility than currently allowed under state WIA eligibility requirements to serve clients who may earn salaries above the income limit. The officials noted that some workers in need of skills upgrades could not be served under WIA because they did not qualify based on their income. To address this challenge, officials we visited at a local workforce board in East Texas told us that they set the income limit high enough so that they could serve most low-wage workers in their area.
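To make the disincentive concrete, the sketch below works through the arithmetic behind the officials' examples. It is an illustrative simplification, not the official WIA earnings-change computation (which is based on unemployment insurance wage records over specific quarters); the hours worked and the unemployed job seeker's placement wage are assumptions, while the 25-cent raise and the $3.00-per-hour benchmark come from the Indiana and Michigan examples cited above.

```python
# Illustrative sketch only: simplified arithmetic behind the earnings-change
# disincentive discussed above. Hours and the placement wage for the
# unemployed job seeker are assumptions; the 25-cent raise and the
# $3.00-per-hour benchmark are the officials' examples cited in the text.

HOURS_PER_QUARTER = 520  # assumed full-time work: 40 hours/week x 13 weeks


def quarterly_earnings_gain(pre_wage: float, post_wage: float) -> float:
    """Change in quarterly earnings given hourly wages before and after."""
    return (post_wage - pre_wage) * HOURS_PER_QUARTER


# Unemployed job seeker placed into a job at an assumed $9.00 per hour.
unemployed_gain = quarterly_earnings_gain(pre_wage=0.00, post_wage=9.00)

# Employed certified nursing assistant whose upgrade training yields a
# 25-cent-per-hour raise (Indiana official's example).
employed_gain = quarterly_earnings_gain(pre_wage=8.00, post_wage=8.25)

# Gain equivalent to a $3.00-per-hour raise (Michigan official's example).
benchmark_gain = 3.00 * HOURS_PER_QUARTER

print(f"Unemployed-to-employed gain: ${unemployed_gain:>8,.2f} per quarter")
print(f"Employed worker's gain:      ${employed_gain:>8,.2f} per quarter")
print(f"$3.00/hour benchmark:        ${benchmark_gain:>8,.2f} per quarter")
# $4,680.00 vs. $130.00 vs. $1,560.00 -- the employed worker's gain falls far
# below the benchmark, while the new hire's gain easily exceeds it, which is
# why boards focused on performance levels may favor unemployed clients.
```

Under these assumptions, serving the employed worker lowers a board's average earnings change even though the worker clearly benefits from the training, which is the tension the recommendation that follows seeks to address.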
As of program year 2001, many states and local workforce boards were beginning to make use of the flexibility allowed under WIA and welfare reform to fund training for employed workers, including low-wage workers. They used WIA state set-aside funds and local funds, as well as TANF and state funds, as the basis for publicly funded training for employed workers. In addition, they considered business needs in determining how these funds were used to train employed workers. Consequently, training for employed workers could better reflect the skills that employers need from their workforce in a rapidly changing economy. In addition, such skills may help employees better perform in their jobs and advance in their careers. Training for employed workers is particularly critical for workers with limited education and work skills, especially those earning low wages. For such workers, obtaining training while employed may be critical to their ability to retain their jobs or become economically self-sufficient. While training low-wage workers involves particular challenges, workforce and other officials have developed ways to implement training initiatives for low-wage workers that may help mitigate some of these challenges. This is especially necessary in the economic downturn following the boom in the 1990s when TANF and WIA were created. However, WIA’s performance measure for the change in average earnings may create a disincentive for states and local workforce boards to fund training for employed workers because employed workers, particularly low-wage workers, may be less likely than unemployed workers to significantly increase their earnings after training. To the extent that state and local workforce investment areas focus on unemployed workers to ensure that they meet WIA’s performance measure for earnings change— and thereby avoid penalties—employed workers, and especially low-wage workers, may have a more difficult time obtaining training that could help them remain or advance in their jobs. As currently formulated, this performance measure supports earlier federal programs’ focus on training unemployed workers and does not fully reflect WIA’s new provision to allow federally funded training for employed workers. To improve the use of WIA funds for employed worker training, we recommend that the Secretary of Labor review the current WIA performance measure for change in adult average earnings to ensure that this measure does not provide disincentives for serving employed workers. For example, Labor might consider having separate average earnings gains measures for employed workers and unemployed workers. We provided the Departments of Labor and Health and Human Services with the opportunity to comment on a draft of this report. Formal comments from these agencies appear in appendixes IV and V. Labor agreed with our findings and recommendation to review the current WIA performance measure for change in the adult average earnings to ensure that the measure does not provide disincentives for serving employed workers. Labor stated that, in May 2002, the department contracted for an evaluation of the WIA performance measurement system and noted that one of the objectives of the evaluation is to determine the intended and unintended consequences of the system. Labor believes that GAO’s suggestion to have separate measures on earnings gains for employed workers would be an option to consider for improving WIA performance. 
HHS also agreed with the findings presented in our report and noted that the information in GAO's report would help states develop and enhance appropriate worker training programs, and provide services and supports that address the barriers to such training. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time we will send copies of this report to relevant congressional committees and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix IV.
To provide the Congress with a better understanding of how states and local areas were training employed workers, including low-wage workers, we were asked to determine (1) the extent to which local areas and states provide assistance to train employed workers, including funding training; (2) the focus of such training efforts and the kind of training provided; and (3) when targeting training to low-wage workers, the approaches state and local officials identified to address the challenges in training this population. To obtain this information, we conducted a nationwide mail survey of all local workforce investment boards, conducted semistructured telephone interviews with state officials, and visited four states. We conducted a literature search and obtained reports and other documents on employed worker training from researchers and federal, state, and local officials. To obtain information about the federal role in employed worker training, we met with officials from the departments of Labor, Health and Human Services (HHS), and Education. In addition, we interviewed researchers and other workforce development training experts from associations such as the National Governors' Association, National Association of Workforce Investment Boards, U.S. Chamber of Commerce, and American Society for Training and Development. To document local efforts to train employed workers, we conducted a nationwide mail survey, sending questionnaires to all 595 local workforce boards. We received responses from 470 boards, giving us a 79 percent response rate. Forty-five states had response rates of 60 percent or more, and 17 states, including all states with a single workforce board, had response rates of 100 percent. The mailing list of local workforce boards was compiled using information from a previous GAO study of local youth councils and directories from the National Association of Workforce Investment Boards and the National Association of Counties. The survey questionnaire was pretested with 6 local workforce boards and revised based on their comments. Surveys were mailed on April 24, 2002, follow-ups were conducted by mail and phone, and the survey closing date was August 16, 2002. We reviewed survey questionnaire responses for consistency and in several cases contacted the workforce boards to resolve inconsistencies, but we did not otherwise verify the information provided in the responses. In the survey, we collected data for the WIA program years 2000 (July 1, 2000, through June 30, 2001) and 2001 (July 1, 2001, through June 30, 2002) so that we could make comparisons and identify trends.
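As an illustration of the simple tabulations behind the survey figures cited above, the short sketch below recomputes the response rate and shows how year-over-year shares of boards funding employed worker training can be tallied. It is a hypothetical example: the per-board records and field names are invented for illustration; only the totals (595 boards surveyed, 470 responding) and the 22 and 31 percent figures reported earlier come from the text.

```python
# Hypothetical sketch of the tabulations behind the survey figures cited above.
# The totals (595 surveyed, 470 responding) come from the text; the sample
# records and field names below are invented solely to show the calculation.

surveyed_boards = 595
responding_boards = 470

response_rate = responding_boards / surveyed_boards
print(f"Response rate: {response_rate:.0%}")  # ~79%

# Invented example records; the survey asked boards whether they budgeted or
# spent funds on employed worker training in program years 2000 and 2001.
responses = [
    {"state": "TX", "spent_py2000": True,  "spent_py2001": True},
    {"state": "MN", "spent_py2000": False, "spent_py2001": True},
    {"state": "FL", "spent_py2000": False, "spent_py2001": False},
]

for year in ("spent_py2000", "spent_py2001"):
    share = sum(r[year] for r in responses) / len(responses)
    print(f"{year}: {share:.0%} of responding boards")
# Applied to the full set of 470 responses, this kind of tally yields the
# 22 percent (2000) and 31 percent (2001) figures reported earlier.
```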
We analyzed these data by calculating simple statistics and by performing a content analysis in which we coded responses to open-ended questions for further analysis. Because our national mail survey did not use probability sampling, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the characteristics of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such non-sampling errors. For example, survey specialists in combination with subject matter specialists designed our questionnaire; we pretested the questionnaire to ensure that questions were clear and were understood by respondents; and to increase our response rate for the mail survey, we made a follow-up mailing and called local workforce investment boards that did not respond by a specified date. To determine state efforts to train employed workers, including low-wage workers, we conducted semistructured telephone interviews in 16 judgmentally selected states with state officials responsible for workforce development, economic development, and TANF funds used for education and training. We selected these states in part because they were geographically dispersed and represented about one-half of the U.S. population. In addition, we selected these states because between 1998 and 2001, most of them used federal funds available for training employed workers, including demonstration and planning grants, which potentially indicated the state’s interest in training these workers. Thirteen of the selected states received States’ Incumbent Worker System Building Demonstration Grants in 1998 from the Department of Labor; 10 of the selected states were identified in previous GAO work as having used WIA state set-aside funds for current worker training, and 8 of the selected states were among those receiving Employment Retention and Advancement (ERA) demonstration grants from the Department of Health and Human Services. (See table 2.) In each state, we interviewed state officials responsible for workforce development and economic development. We also interviewed state officials responsible for TANF funds used for education and training to obtain information about training for low-wage workers. To identify these state officials, we initially called the state contact for the WIA program. These officials then provided us with the names of officials or their designees who represented workforce development and economic development perspectives in their state. We similarly identified state officials responsible for TANF funds used for education and training. Since states structure their programs and funding differently, sometimes state officials we interviewed were located in different agencies while others were located in different offices within the same agency. For this reason we used the term “office” throughout the report to represent their different perspectives. We used survey specialists in designing our interview questions and pretested them in several states to ensure that they were clear and could be understood by those we interviewed. 
In our interviews, we asked state officials for information about training efforts for the program year 2000, which ended on June 30, 2001, and asked if there were any significant changes in program year 2001, which ended June 30, 2002. Our interviews with state officials were conducted between March and October 2002. In analyzing our interview responses from state officials, we calculated frequencies in various ways for all close-ended questions and arrayed and analyzed narrative responses thematically for further interpretation. We did not independently verify data, although we reviewed the interview responses for inconsistencies. To obtain in-depth information about the challenges that local officials have experienced in developing and implementing training programs specifically for low-wage workers, and promising approaches they identified to address these challenges, we made site visits to four states: Florida, Minnesota, Oregon, and Texas. We selected these four states for site visits to provide geographic dispersion and because federal and state officials and other experts had identified these states as having specific efforts for training employed workers, especially initiatives to help low-wage workers retain employment and advance in their jobs. Furthermore, each of the four states received federal HHS Employment Retention and Advancement grants. In our view, these demonstration grants served as indications of the state's interest in supporting job retention and advancement, including training, for low-wage workers. We visited a minimum of two localities in each state, representing a mix of urban and rural areas in most cases. We chose local sites in each state on the basis of recommendations from state officials about training initiatives with a low-wage focus. Teams of at least three people spent from 2 to 4 days in each state. Typically, we interviewed local officials, including employers, one-stop staff, local workforce board staff, and training providers such as community colleges and private training organizations. We toured training facilities and observed workers and students receiving training. We also obtained and reviewed relevant documents from those we interviewed. (See table 3.) We reviewed surveys and telephone interview responses for consistency, but we did not otherwise verify the information provided in the responses. Our work was conducted between October 2001 and December 2002 in accordance with generally accepted government auditing standards.
Appendix III: Information on State Funding Sources
While these states were awarded Employment Retention and Advancement grants from HHS, state officials we contacted did not identify these grants as sources of funding for employed worker training.
Natalie S. Britton, Ramona L. Burton, Betty S. Clark, Anne Kidd, and Deborah A. Signer made significant contributions to this report, in all aspects of the work throughout the assignment. In addition, Elizabeth Kaufman and Janet McKelvey assisted during the information-gathering segment of the assignment. Jessica Botsford, Carolyn Boyce, Stuart M. Kaufman, Corinna A. Nicolaou, and Susan B. Wallace also provided key technical assistance.
Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003.
High-Skill Training: Grants from H-1B Visa Fees Meet Specific Workforce Needs, but at Varying Skill Levels. GAO-02-881.
Washington, D.C.: September 20, 2002.
Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002.
Workforce Investment Act: Coordination between TANF Programs and One-Stop Centers Is Increasing, but Challenges Remain. GAO-02-500T. Washington, D.C.: March 12, 2002.
Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002.
Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA's Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001.
Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000.
Welfare Reform: Status of Awards and Selected States' Use of Welfare-to-Work Grants. GAO/HEHS-99-40. Washington, D.C.: February 5, 1999.
Although training for employed workers is largely the responsibility of employers and individuals, the Workforce Investment Act (WIA) allowed state and local entities to use federal funds for training employed workers. Similarly, welfare reform legislation created Temporary Assistance for Needy Families (TANF) block grants and gave states greater flexibility to design training services for TANF clients to help them obtain and retain jobs. To better understand how the training needs of employed workers, including low-wage workers, are publicly supported, GAO was asked to determine (1) the extent to which local areas and states provide assistance to train employed workers, including funding training; (2) the focus of such training efforts and the kind of training provided; and (3) when targeting training to low-wage workers, the approaches state and local officials identified to address challenges in training this population. Nationwide, two-thirds of the 470 local workforce boards responding to our survey provided assistance to train employed workers, such as partnering with employers to develop training proposals or funding training. Nearly 40 percent specifically budgeted or spent funds on training these workers. The number of boards that reported funding training for employed workers varied by state, but most states had at least one workforce board that targeted funds on such training. At the state level, all 16 states that GAO contacted also funded training for employed workers. These states and local workforce boards reported funding training that addressed specific business and economic needs. Although many types of training for employed workers were funded, occupational training to upgrade skills, such as learning new computer applications, and basic skills training, such as English and math, were emphasized most often, and community or technical colleges were most frequently used to provide these services. In targeting training specifically for low-wage workers, state and local officials identified approaches to challenges that hindered individuals' and employers' participation in training. Officials developed approaches to address some of the personal issues that low-wage workers faced that made participating in training difficult. They also developed ways to gain support from employers who were reluctant to participate in low-wage worker training, such as by partnering with employers to develop career paths that help retain employees within companies. However, officials reported that challenges to implementing successful training still exist. For example, they explained that the WIA performance measure that tracks the change in adult earnings after 6 months could limit training opportunities for employed workers, including low-wage workers. The wage gain for employed workers would not likely be as great as that for unemployed job seekers, and this might provide a disincentive to enrolling employed workers in training because their wage gain may negatively affect program performance.
Except for those of summer employees and some contractors, DEA background investigations were designed to assess whether individuals met the requirements to receive a "top-secret" clearance. DEA used the results of these background investigations to (1) help determine whether individuals were suitable for employment and (2) provide a basis for granting a security clearance. Employees with top-secret clearances can have access to information classified up to and including the top-secret level. The unauthorized disclosure of classified information can cause irreparable damage to the national interest and loss of human life. Unless otherwise provided by law, the investigation of a person entering or employed by the federal government in the competitive service, or by career appointment in the Senior Executive Service, is the responsibility of OPM. Agencies may request delegated authority from OPM to conduct or contract out investigations of their own employees and applicants. DEA obtained this authority from OPM in the early 1980s. The two agencies executed a Memorandum of Understanding and Agreement, which transferred authority to DEA and set forth the general requirements that DEA must follow. The memorandum has been renewed periodically, but the most recent one expired in September 1998. Nevertheless, OPM and DEA have continued to follow it, according to officials from both agencies. The Memorandum of Understanding and Agreement between OPM and DEA required DEA to follow the background investigation standards used by OPM. These standards held that background investigations, needed to provide employees a top-secret clearance, must meet investigation requirements established by Executive Order 12968, "Access to Classified Information." This executive order directed the President's Security Policy Board to develop a common set of investigative standards to be used by executive agencies for determining eligibility for access to classified information. The President approved the standards that the Board developed in March 1997. DEA's background investigations were part of its personnel security program. DEA's Office of Security Programs was responsible for operating the program and, in connection with that responsibility, was to provide policy guidance and management of background investigations. This office was responsible for ensuring that appropriate investigations were completed on applicants and employees as well as providing security adjudication services for DEA. As part of these adjudication services, this office used the results of background investigations to determine whether individuals were suitable for employment and whether a security clearance should be granted. In addition to DEA, OPM and DOJ both had responsibility for overseeing the program and DEA's background investigations. Background investigations made for periodic renewals of security clearances were referred to as reinvestigations. In fiscal year 1998, an estimated 5,583 background investigations were conducted of DEA applicants and employees. Of that number, about 3,401 were initial background investigations and another 2,182 were reinvestigations. Most of the investigations (about 74 percent) and all of the reinvestigations in 1998 were done by one contractor.
However, DEA Special Agents conducted the background investigations of persons who applied for Special Agent positions, which accounted for about 26 percent of all initial background investigations. Based on investigative standards implementing Executive Order 12968, a typical background investigation for a top-secret clearance would include major investigative components such as proof of birth and citizenship for subjects and their immediate family; a search of investigative files and other records held by federal agencies, including the FBI and CIA (referred to as a national agency check); financial review, including a credit bureau check; review of state and local law enforcement and court records (referred to as a local agency check); verification of recent education; record checks and personal testimony at places of employment; interviews of references, including coworkers, employers, friends, educators, neighbors, and other individuals such as an ex-spouse; and a personal interview with the applicant. To identify and describe the circumstances that led DEA to consider relinquishing its delegated authority to conduct personnel background investigations, we interviewed cognizant officials of DEA, DOJ, and OPM. We obtained and reviewed the Memorandum of Understanding and Agreement between DEA and OPM regarding this authority. We obtained and reviewed all appraisals of DEA's personnel security program and/or the quality of background investigations done by OPM and DOJ since 1992, when DEA was first appraised by OPM as a separate DOJ component. We did not review individual background investigations or DEA's personnel security program. We also did not determine whether any employee who received a security clearance based on a deficient background investigation would have been denied clearance if the investigation had been performed according to required standards. We obtained and reviewed an internal DEA assessment of its personnel security program. We also obtained and reviewed relevant correspondence between DEA, DOJ, and OPM related to DEA's security program and its background investigations. To assess whether OPM acted in an independent and objective manner in choosing to review DEA's background investigations and security program, we applied three criteria posed in the following questions: What was OPM's responsibility for reviewing background investigations performed by DEA and/or its contractors? Did the frequency of OPM's reviews seem reasonable given the state of DEA's background investigations and program? Was the frequency of OPM's oversight activities at other agencies with delegated authority similar or dissimilar to the frequency of OPM's oversight at DEA? For this second objective, we reviewed Executive Order 10450, "Security Requirements for Government Employment," which, among other things, specified OPM's responsibilities for reviewing federal agencies' personnel security programs. We also identified all agencies, in addition to DEA, that had received delegated authority from OPM to perform background investigations. We compared OPM's oversight activities at those agencies—the frequency of reviews and the results—to OPM's oversight activities at DEA. We requested comments on a draft of this report from the Attorney General of the United States on behalf of DOJ and DEA. We also requested comments from the Director, OPM. OPM's comments are discussed near the end of this letter and are reprinted in appendix I.
DOJ orally provided technical and clarifying comments, which we incorporated into this report. We did our work in Washington, D.C., from May through July 1999 in accordance with generally accepted government auditing standards. As of July 1999, DEA was considering whether to relinquish its personnel security background investigation authority to OPM. It had been brought to this point by the deficiencies found by OPM over much of the decade and by an assessment DOJ made in 1997. DOJ initiated discussions with DEA in late 1998 about relinquishing DEA's authority. Partially in response to this initiative, DEA conducted an assessment and concluded that it lacked the expertise and resources to capably perform or oversee all of its background investigations. Through a Memorandum of Understanding and Agreement with OPM, DEA was required to forward all background investigation reports to OPM when they were completed. OPM was required to review samples of reports to determine whether investigative requirements called for by the agreement were met. In addition to reviewing completed investigation reports, OPM was required to assess DEA's overall personnel security program under which background investigations were conducted. OPM's reviews of background investigation reports submitted by DEA continually found the investigations deficient. Between 1996 and 1998, OPM reviewed a total of 265 background investigations conducted by DEA and its contractors. OPM found all but one investigation deficient (i.e., all but one failed to fully comply with OPM investigative requirements, which DEA agreed to follow). Some of these background investigations contained a single deficiency, while others contained more than one deficiency. There was no readily available tabulation of the deficiencies in all 264 investigations found deficient or of the nature of those deficiencies. However, some information was available. The 49 DEA investigative reports that OPM found deficient in 1998 contained 221 deficiencies. Six reports contained one deficiency, and the remaining 43 reports contained multiple deficiencies. The types of deficiencies OPM identified included not determining the nature and extent of contact between a personal source and the subject of the investigation; gaps in coverage of the verification, through personal sources, of all of the subject's major activities, unemployment, and means of support; lack of or inadequate follow-up of issues admitted during the personal interview or disclosed on the Questionnaire for National Security Positions; failure to search Central Intelligence Agency files related to a subject's foreign-born status or foreign travel; failure to provide information from public sources that was complete, such as bankruptcies, financial matters, and divorce; neglecting to supply verification of a subject's citizenship through Immigration and Naturalization Service searches; and failure to obtain appropriate verification of an individual's name, date of birth, and place of birth through state and local bureaus of vital statistics. Generally, there is no standard for stating how serious a deficiency might be or what type is the most serious, because such deficiencies are errors of omission, such as failing to check a law enforcement record. Ultimately, a deficiency's seriousness depends on what type of activity might have been found if the appropriate search had been conducted or if a particular investigative technique had been used.
Also, a seemingly less serious deficiency may provide an investigative lead that uncovers activity that might compromise the nation’s security interest. OPM returned the reports that it found deficient to DEA for further work and correction. However, in 1998, when OPM followed up on the deficient reports that it identified in 1996 and 1997, OPM generally found that DEA had not corrected the deficiencies. OPM also found that even though the background investigations were deficient, DEA still granted security clearances. In addition to its periodic review of investigations, OPM also reviewed DEA’s overall personnel security program in 1992 and again 6 years later in 1998. OPM found numerous deficiencies in 1992, and it found that DEA still had not corrected most of those deficiencies in 1998. The OPM findings include the following: The reinvestigation program did not effectively identify employees who were subject to routine reinvestigations. At DEA, employees were required to have their security clearances renewed every 5 years. Many employees in “Critical Sensitive/Top-Secret” positions were overdue for reinvestigation. DEA’s Planning and Inspection Manual provisions were insufficient because they did not include pertinent OPM and DOJ regulatory guidelines. The manual, among other deficiencies, failed to incorporate administrative due process guidelines for applicants, employees, and contract employees to appeal the denial or revocation of a security clearance. Physical security safeguards for the storage and protection of investigative files were insufficient. Personnel security adjudicators whose job was to decide who would be granted security clearances needed additional training and oversight. DEA’s Background Investigation Handbook did not include mandatory OPM investigative requirements. DEA did not forward copies of all its completed background investigations to OPM, as required by the conditions of its delegated authority. In addition to OPM reviews, DEA’s security program was subject to compliance reviews by DOJ, which was responsible for the development, supervision, and administration of security programs within the department. In 1997, DOJ audited the DEA program and reported the results to DEA. Based on the results of this review and OPM’s reviews, DOJ initiated discussions with DEA in 1998 on relinquishing its background investigation authority to OPM. DOJ’s audit identified deficiencies that were similar to those that OPM identified in its review of DEA’s security program in 1992. OPM also found the same sort of deficiencies in 1998 after the DOJ audit. The DOJ findings identified issues and deficiencies in (1) periodic reinvestigations; (2) background investigations; (3) due process procedures; (4) resources for monitoring, tracking, and controlling the investigation process; (5) adjudication (process for deciding whether security clearances should be granted); and (6) staff competence. DOJ referred to its findings as critical security issues and deficiencies. In October 1998, the Assistant Attorney General for Administration wrote to the DEA Administrator expressing his belief that DEA’s investigative function should be relinquished to OPM but said as well that he would like to hear the DEA Administrator’s comments. The memorandum was based on the DOJ audit and on the recurring findings of OPM. In that memorandum, DOJ’s Assistant Attorney General also expressed concern with what DOJ saw as DEA’s inability to maintain an effective overall personnel security program. 
This inability came about, the memorandum stated, because resources were consumed in doing certain functions—checking federal records and performing quality control—that OPM performed when doing background investigations for other agencies. OPM checked the files of various federal agencies, such as the investigative and criminal history files of the FBI, by computer. Unlike OPM, DEA lacked the extensive computer links to federal files and did many file checks manually. Checks of federal files were referred to as National Agency Checks in background investigations. In the spring of 1999, DEA assessed its personnel security program, concentrating on background investigations. This assessment, according to DEA officials, was done in response to the Assistant Attorney General for Administration’s October memorandum, subsequent meetings with DOJ officials, and DEA’s own awareness of the condition of its personnel security program. The assessment covered areas such as the (1) results of reviews performed by OPM and DOJ, (2) requirements of the Memorandum of Understanding and Agreement with OPM, (3) efforts to correct deficiencies with the security program and background investigations, (4) contract with the company that currently did background investigations for DEA, and (5) other management issues related to background investigations. Although its assessment noted efforts to resolve concerns raised by OPM and DOJ, DEA identified several issues that led to the conclusion that it did not have the capability to effectively perform or oversee background investigations. It also concluded that some security clearances were granted based on deficient background investigations. As of July 1999, DEA was considering whether to relinquish its background investigation authority to OPM. Following are some of the issues that the assessment identified, which led to DEA’s conclusion that it had not effectively performed or overseen background investigations. DEA had historically failed to capably perform or oversee its background investigations. DEA found that the majority of people working in its personnel security unit had not been adequately trained regarding the laws, regulations, executive orders, policies, and technical practices central to initiating, performing, and overseeing background investigations, as well as providing personnel security adjudicative services to DEA. DEA had not ensured, as required by the conditions of its delegated authority, that each investigator performing investigations under its delegation had been screened by an investigation that met no less than OPM’s top-secret clearance requirements. DEA did not comply with this requirement for its current contractor because DEA did not have funds to finance such investigations. DEA had not developed or implemented an integrity follow-up program to monitor contract investigators, as required under its delegated authority. DEA concluded that, under current circumstances and without relief that OPM could provide, it was likely that DEA would remain in violation of the integrity follow-up program requirement. DEA personnel performed National Agency Checks, a requirement of each background investigation. DEA’s costs for performing these checks were largely attributable to DEA’s need to conduct many of these checks manually. In its self-assessment, DEA stated that OPM, however, had sophisticated computer facilities that permitted it to conduct required National Agency Checks through direct-access computer links with all the relevant agencies.
DEA concluded that it saw no advantage in duplicating a capability that already existed in OPM. DEA bears ultimate responsibility for ensuring that background investigations performed under its delegation from OPM conform to mandated investigative criteria. DEA had been heavily criticized for its performance in this regard. DEA concluded that OPM has a fully qualified and experienced quality-control staff and that it was not reasonable for DEA to continue to attempt to duplicate this capability. As of July 1999, DEA had not made a final decision on relinquishing its background investigation authority. From what DEA officials told us, it was considering retaining the authority to investigate individuals who apply for DEA Special Agent positions but relinquishing the authority to do all other background investigations, including periodic reinvestigations of Special Agents. In his October 1998 memorandum, the Assistant Attorney General for Administration said that he believed that DEA should relinquish all authority, including the authority to investigate the backgrounds of Special Agent applicants. According to DEA, relinquishing all other background investigation authority would allow DEA to redirect resources into the investigative process for Special Agent applicants. The redirected resources would go into increased training, policy guidance, and oversight. DEA said it believed that it would be unwise to segregate the background investigation from the overall Special Agent applicant selection process by having it conducted by an independent entity not familiar with DEA’s unique requirements for Special Agents. Special Agents did the background investigations of applicants and would continue to do these investigations if that authority was retained, according to DEA. DEA would not be the first agency to relinquish background investigation authority to OPM. According to an OPM official, five agencies have done so: (1) the Federal Emergency Management Agency in 1991, (2) the Department of Commerce in 1994, (3) the National Aeronautics and Space Administration Office of Inspector General in 1994, (4) the U.S. Soldiers’ and Airmen’s Home in 1994, and (5) the Department of Education Office of Inspector General in 1998. As previously mentioned, OPM had a sole-source contract with USIS, a firm that OPM was instrumental in creating, to do all background investigations except those done by agencies under delegation agreements. If DEA were to relinquish its background investigation authority to OPM, the contract between OPM and USIS would require OPM to order this investigative work from USIS until the contract expired. Because of the relationship between OPM and USIS, we reviewed whether OPM acted in an objective and independent manner in choosing to review DEA’s background investigation reports and personnel security program. To gauge whether OPM acted objectively and independently, we considered OPM’s responsibilities towards the security program and the program’s background investigations and whether OPM’s treatment of DEA differed from its treatment of other agencies. OPM appeared to have acted in an objective and independent manner. OPM was required to review DEA’s personnel security program and background investigations. This requirement was contained in the Memorandum of Understanding and Agreement between OPM and DEA, which provided that OPM would monitor the agreement as part of its security program appraisal process.
In addition, Executive Order 10450, “Security Requirements for Government Employment,” required OPM to make a continuing study of the order’s implementation. The purpose of this continuing study is to determine whether deficiencies exist in security programs that could harm the national interest and weaken national security. As already noted, OPM repeatedly found deficiencies in both the security program and the background investigations, which DEA usually did not correct, and DEA concluded that it could not capably perform or oversee background investigations. Given DEA’s history of noncompliance, we believe that it was reasonable for OPM to do reviews of DEA’s investigations. The frequency with which OPM reviewed DEA’s investigation program appeared to be generally in line with the frequency with which OPM reviewed other agencies. In addition to DEA, three other agencies—the U.S. Marshals Service, the Small Business Administration, and the U.S. Customs Service—possessed authority delegated from OPM to conduct background investigations in fiscal year 1999. OPM reviewed the security program of the U.S. Marshals Service in 1989 and 1999 (in progress as of July 1999), the Customs Service in 1989 and 1994, and the Small Business Administration in 1983 and 1992. In comparison, it reviewed the DEA program in 1992 and followed up in 1998. OPM reviewed a sample of the background investigation reports of the U.S. Marshals Service and the Small Business Administration from July 1996 through April 1999, as it did for DEA. According to an OPM official, OPM did not routinely review the background investigation reports of the U.S. Customs Service because the Memorandum of Understanding and Agreement delegating the investigative authority to Customs did not include this requirement. However, one OPM review of 89 Customs investigations, completed in 1993, found 46 percent to be deficient. OPM was critical in its assessment of other agencies, as it was with DEA. For the aggregate samples of background investigation reports that OPM reviewed from July 1996 through April 1999, the rate of deficiency for the Small Business Administration was 75 percent. It was 93 percent for those from the U.S. Marshals Service. In comparison, the rate of deficiency for background investigation reports from DEA, which DEA and two contractors prepared over several years (1996 to 1999), was 98 percent. OPM computed these percentages by dividing the number of reports it found deficient by the total number it reviewed. Rather than raising a question regarding OPM’s independence and objectivity in choosing to review background investigations performed by DEA and its contractors, the evidence raises the question of why OPM did not act to rescind DEA’s delegated authority. According to OPM, the Administration announced in late 1994 that OPM’s Investigative Unit was to be privatized. The privatization occurred in July 1996. During that period, two private investigative firms sued OPM. According to OPM, these firms believed that OPM was going to take work away from them to support its privatized contractor. The suits were settled when OPM agreed, among other things, that it would not rescind delegations of authority, such as the DEA delegation, except for unsatisfactory performance. Also during this period, a former director of OPM testified before Congress on its privatization plans and emphasized that OPM did not intend to rescind any delegated authorities in order to give new business to the privatized company.
According to OPM, the agency has been sensitive to these commitments as well as to the potential perceptions of OPM’s motivation for rescinding any such delegation. We have not evaluated OPM’s explanation of this situation. However, at your request we are separately reviewing related issues concerning OPM’s oversight function regarding background investigations. DEA had a long history of deficiencies in its personnel security program, including background investigations done by both contractor and agency employees that did not meet federal standards. Federal agency security programs are aimed at protecting national security interests and are predicated on thoroughly reviewing the backgrounds of federal job applicants and employees to ensure their suitability for employment and/or access to national security information. Given DEA’s difficulties in ensuring the quality of its personnel background investigations and its conclusion that it is not able to capably perform or oversee background investigations, its consideration of relinquishing its delegated authority is not unreasonable. Nor do OPM’s periodic appraisals of DEA background investigations for adherence to prescribed standards appear unreasonable. OPM has a mandated responsibility to oversee agency security programs, including background investigations, and appeared not to have treated DEA significantly differently, in terms of oversight, from other agencies with delegated authority. We received written comments on a draft of this report from the Director of OPM and oral comments on August 17, 1999, from the Director, Audit Liaison Office, DOJ. The OPM Director said that she was pleased that we concluded that OPM was objective and independent in its oversight of the DEA personnel security program. Regarding the report’s statement that the evidence raises a question of why OPM did not rescind DEA’s delegated authority, the Director said that OPM had worked with DEA over several years to help it correct deficiencies that OPM had identified and that several factors militated against the rescission of DEA’s authority. In addition to the factors cited on page 13 of this report, OPM said that it continued to work with DEA and DOJ to resolve the continuing personnel security problems and that OPM had let a reasonable amount of time elapse for DOJ, which is responsible for all of the department’s security programs, to take the necessary action. In October 1998, DOJ advised DEA to relinquish its authority. OPM’s complete comments are reprinted in appendix I. The DOJ Audit Liaison Director orally provided technical and clarifying comments, which we incorporated into this report. The Audit Liaison Director said that DOJ had no other comments. We are sending copies of this report to Senators Daniel K. Akaka, Robert C. Byrd, Ben Nighthorse Campbell, Thad Cochran, Susan M. Collins, Byron L. Dorgan, Richard J. Durbin, Judd Gregg, Orrin G. Hatch, Ernest F. Hollings, Patrick J. Leahy, Carl Levin, Joseph I. Lieberman, Charles E. Schumer, Ted Stevens, Fred Thompson, Strom Thurmond, and George V. Voinovich and Representatives Dan Burton, John Conyers, Jr., Elijah Cummings, Jim Kolbe, Steny H. Hoyer, Henry J. Hyde, Bill McCollum, John L. Mica, Patsy T. Mink, David Obey, Harold Rogers, Joe Scarborough, Robert C. Scott, Jose E. Serrano, Henry A. Waxman, and C. W. Bill Young in their capacities as Chair or Ranking Minority Members of Senate and House Committees and Subcommittees.
We will also send copies to the Honorable Janet Reno, Attorney General of the United States, Department of Justice; the Honorable Janice R. Lachance, Director, Office of Personnel Management; Mr. Donnie R. Marshall, Acting Administrator, Drug Enforcement Administration, Department of Justice; and other interested parties. We will make copies of this report available to others on request. If you have any questions regarding this report, please contact me or Richard W. Caradine at (202) 512-8676. Key contributors to this assignment were John Ripper and Anthony Assia.
Pursuant to a congressional request, GAO provided information on background investigations conducted by the Drug Enforcement Administration (DEA), focusing on: (1) the circumstances that led DEA to consider relinquishing its authority to conduct personnel background investigations; and (2) whether the Office of Personnel Management (OPM) acted in an independent and objective manner in choosing to review DEA and its background investigations. GAO noted that: (1) a series of evaluations in the 1990s critical of DEA's background investigations and personnel security program caused DEA to consider relinquishing its background investigation authority; (2) the findings of OPM's assessments over much of the 1990s, an assessment by the Department of Justice (DOJ) in 1998, and its own assessment in 1999 triggered DEA's consideration of this issue; (3) DEA's relinquishment of investigation authority would be consequential because DEA and its contractor performed an estimated 5,600 background investigations in 1998; (4) during the late 1990s, OPM reviewed a sample of 265 background investigation reports prepared by DEA and its contractors and determined that all but 1 investigation was deficient in meeting the investigative requirements that DEA had agreed to follow; (5) DOJ audited DEA's personnel security program in 1997 and found deficiencies similar to what OPM had found in 1992 and again in 1998; (6) based on the DOJ audit and the recurring findings of OPM, DOJ's Assistant Attorney General for Administration told the DEA Administrator in October 1998 that he believed that DEA should relinquish all of its background investigation authority to OPM; (7) in early 1999, DEA conducted its own examination of the personnel security program, focusing on background investigations, and concluded that DEA was not able to capably perform or oversee background investigations; (8) this lack of capability allowed security clearances to be granted, regardless of whether the related background investigations were adequate; (9) DEA had allowed contract investigators to perform background investigations, even though the investigators had not gone through required background investigations because DEA did not have funds to finance such investigations; (10) as of July 1999, subsequent to its examination of the personnel security program, DEA was considering relinquishing its authority for background investigations to OPM, except for the authority to investigate backgrounds of applicants for DEA Special Agent positions; (11) DEA believed that it would be unwise to separate the background investigation from the overall applicant selection process by having it conducted by an independent entity not familiar with DEA's unique requirements for Special Agents; and (12) OPM appeared to have been objective and independent in choosing to review DEA's personnel security program and background investigations.
The current model for regulation and oversight of the accounting profession involves federal and state regulators and a complex system of self-regulation by the accounting profession. The functions of the model are interrelated and their effectiveness is ultimately dependent upon each component working well. Basically, the current model includes: licensing members of the accounting profession to practice within the jurisdiction of a state, as well as issuing rules and regulations governing member conduct, which is done by the various state boards of accountancy; setting accounting and auditing standards, which is done by the Financial Accounting Standards Board (FASB) and the Auditing Standards Board (ASB), respectively, through acceptance of the standards by the SEC; setting auditor independence rules, which, within their various areas of responsibility, have been issued by the American Institute of Certified Public Accountants (AICPA), the SEC, and GAO; and oversight and discipline, which is done through a variety of self-regulatory and public regulatory systems (e.g., the AICPA, the SEC, and various state boards of accountancy). Enron’s failure and a variety of other recent events have brought a direct focus on how well the current systems of regulation and oversight of the accounting profession are working in achieving their ultimate objective of ensuring that the opinions of independent auditors on the fair presentation of financial statements can be relied upon by investors, creditors, and the various other users of financial reports. The issues currently being raised about the effectiveness of the accounting profession’s self-regulatory system are not unique to the collapse of Enron. Other business failures, restatements of financial statements, and the proliferation of pro forma earnings assertions over the past several years have called into question the effectiveness of the current system. A continuing message is that the current self-regulatory system is fragmented, is not well coordinated, and has a disciplinary function that is neither timely nor supported by effective sanctions, all of which creates a public image of ineffectiveness. In addressing these issues, proposals should consider whether overall the system creates the right incentives, transparency, and accountability, and operates proactively to protect the public interest. Also, the links within the self-regulatory system and with the SEC and the various state boards of accountancy (the public regulatory systems) should be considered, as these systems are interrelated, and weaknesses in one component can put strain on the other components of the overall system. I would now like to address some of the more specific areas of the accounting profession’s self-regulatory system that should be considered in forming and evaluating proposals to reshape or overhaul the current system. The accounting profession’s current self-regulatory system for public company audits is heavily reliant on the AICPA through a system that is largely composed of volunteers from the accounting profession. This system is used to set auditing standards and auditor independence rules, monitor member public accounting firms for compliance with professional standards, and discipline members who violate auditing standards or independence rules. AICPA staff support the volunteers in conducting their responsibilities.
In 1977, the AICPA, in conjunction with the SEC, administratively created the Public Oversight Board (POB) to oversee the peer review system established to monitor member public accounting firms for compliance with professional standards. In 2001, the oversight authority of the POB was expanded to include oversight of the ASB. The POB had five public members and professional staff, and received its funding from the AICPA. On January 17, 2002, the SEC Chairman outlined a proposed new self-regulatory structure to oversee the accounting profession. The SEC’s proposal provided for creating an oversight body that would include monitoring and discipline functions, have a majority of public members, and be funded through private sources, although no further details were announced. The POB’s Chairman and members were critical of the SEC’s proposal and expressed concern that the Board was not consulted about the proposal. On January 20, 2002, the POB passed a resolution of intent to terminate its existence no later than March 31, 2002, leaving a critical oversight function in the current self-regulatory system unfilled. However, the POB’s Chairman has stated that the Board will work to assist in transitioning the functions of the Board to whatever new regulatory body is established. In that respect, the SEC announced on March 19, 2002, that a Transition Oversight Staff, led by the POB’s executive director, will carry out oversight functions of the POB. However, on April 2, 2002, the POB members voted to extend the POB through April 30, 2002, to provide additional time solely to finalize certain POB administrative matters and to facilitate a more orderly transition of oversight activities. The issues of fragmentation, ineffective communication, and limitations on discipline surrounding the accounting profession’s self-regulatory system strongly suggest that the current self-regulatory system is not adequate in effectively protecting the public’s interest. We believe these are structural weaknesses that require congressional action. Specifically, we believe that the Congress should create an independent statutory federal government body to oversee financial audits of public companies. The functions of the new independent body should include: establishing professional standards (auditing standards, including standards for attestation and review engagements; independence standards; and quality control standards) for public accounting firms and their key members who audit public companies; inspecting public accounting firms for compliance with applicable professional standards; and investigating and disciplining public accounting firms and/or individual auditors of public accounting firms who do not comply with applicable professional standards. As discussed later, this new body should be independent from, but closely coordinated with, the SEC in connection with matters of mutual interest. In addition, we believe that the issues concerning accounting standard-setting can best be addressed by the SEC working more closely with the FASB rather than putting that function under the new body.
The powers/authority of the new body should include: requiring all public accounting firms and audit partners that audit financial statements, reports, or other documents of public companies that are required to be filed with the SEC to register with the new body; issuing professional standards (e.g., independence) along with the authority to adopt or rely on existing auditing standards, including standards for attestation and review engagements, issued by other professional bodies (e.g., the ASB); enforcing compliance with professional standards, including appropriate investigative authority (e.g., subpoena power and right to maintain the confidentiality of certain records) and disciplinary powers (e.g., authority to impose fines, penalties, and other sanctions, including suspending or revoking registrations of public accounting firms and individual auditors to perform audits of public companies); requiring the new body to coordinate its compliance activities with the SEC and state boards of accountancy; requiring auditor reporting on the effectiveness of internal control over financial reporting; requiring the new body to promulgate various auditor rotation requirements for key public company audit engagement personnel (i.e., primary and second partners, and engagement managers); requiring the new body to study and report to the Congress on the pros and cons of any mandatory rotation of accounting firms that audit public companies, and take appropriate action; establishing annual registration fees and possibly inspection fees necessary to fund the activities of the new body on an independent and self-sustaining basis; and establishing rules for the operation of the new body. The new body should be created by statute as an independent federal government body. To facilitate operating independently, the new body’s board members should be highly qualified and independent from the accounting profession, its funding sources should not be dependent on voluntary contributions from the accounting profession, and it should have final approval for setting professional standards and its operating rules. In that respect, the new body would have independent decisionmaking authority from the SEC. It would approve professional standards, set sanctions resulting from disciplinary actions, and establish its operating rules. At the same time, it should coordinate and communicate its activities with the SEC and the various state boards of accountancy. The new body should set its own human resource and other administrative requirements and should be given appropriate flexibility to operate as an independent entity and to provide competitive compensation to attract highly competent board members and supporting staff. The new body should also have adequate staff to effectively discharge its responsibilities. Candidates for board membership could be identified through a nominating committee that could include the Chairman of the Federal Reserve, Chairman of the SEC, the Secretary of the Treasury, and the Comptroller General of the United States. The board could have 5 or 7 members with stated terms, such as 5 years with a limited renewal option, and the members’ initial terms should be staggered to ensure some continuity. The members of the board should be appointed by the President and confirmed by the U.S. Senate. At a minimum, the chair and vice-chair should serve on a full-time basis. Importantly, board members should be independent of the accounting profession.
In that regard, board members should not be active accounting profession practitioners and a majority of board members must not have been accounting profession practitioners within the recent past (e.g., 3 years). The new body should have sources of funding independent of the accounting profession. The new body could have authority to set annual registration fees for public companies. It could also have authority to set fees for services, such as inspections of public accounting firms, and authority to charge for copies of publications, such as professional standards and related guidance. The above fees and charges should be set to recover costs and sustain the operations of the new body. For accountability, we believe the new body should report annually to the Congress and the public on the full range of its activities, including setting professional standards, inspections of public accounting firms, and related disciplinary activities. Such reporting also provides the opportunity for the Congress to conduct oversight of the performance of the new body. The Congress also may wish to have GAO review and report on the performance of the new body after the first year of its operations and periodically thereafter. Accordingly, we suggest that the Congress provide GAO not only access to the records of the new body, but also to the records of accounting firms and other professional organizations that may be needed for GAO to assess the performance of the new body. For over 70 years, the public accounting profession, through its independent audit function, has played a critical role in enhancing a financial reporting process that has supported the effective functioning of our domestic capital markets, which are widely viewed as the best in the world. The public’s confidence in the reliability of issuers’ financial statements, which relies in large part on the role of independent auditors, serves to encourage investment in securities issued by public companies. This sense of confidence depends on reasonable investors perceiving auditors as independent expert professionals who have neither mutual interests nor conflicts of interest in connection with the entities they are auditing. Accordingly, investors and other users expect auditors to bring to the financial reporting process integrity, independence, objectivity, and technical competence, and to prevent the issuance of misleading financial statements. Enron’s failure and certain other recent events have raised questions concerning whether auditors are living up to the expectations of the investing public; however, similar questions have been raised over a number of years due to significant restatements of financial statements and certain unexpected and costly business failures, such as the savings and loan crisis. Issues debated over the years continue to focus on auditor independence concerns and the auditor’s role and responsibilities. Public accounting firms providing nonaudit services to their audit clients is one of the issues that has resurfaced with Enron’s failure and the large amount of annual fees collected by Enron’s independent auditor for nonaudit services. Auditors have the capability of performing a range of valuable services for their clients, and providing certain nonaudit services can ultimately be beneficial to investors and other interested parties. However, in some circumstances, it is not appropriate for auditors to perform both audit and certain nonaudit services for the same client.
In these circumstances, the auditor, the client, or both will have to make a choice as to which of these services the auditor will provide. These concepts, which I strongly believe are in the public’s interest, are reflected in the revisions to auditor independence requirements for government audits, which GAO recently issued as part of Government Auditing Standards. The new independence standard has gone through an extensive deliberative process over several years, including extensive public comments and input from my Advisory Council on Government Auditing Standards. The standard, among other things, toughens the rules associated with providing nonaudit services and includes a principle-based approach to addressing this issue, supplemented with certain safeguards. The two overarching principles in the standard for nonaudit services are that: auditors should not perform management functions or make management decisions; and auditors should not audit their own work or provide nonaudit services in situations where the amounts or services involved are significant or material to the subject matter of the audit. Both of the above principles should be applied using a substance over form doctrine. Under the revised standard, auditors are allowed to perform certain nonaudit services provided the services do not violate the above principles; however, in most circumstances certain additional safeguards would have to be met. For example, (1) personnel who perform allowable nonaudit services would be precluded from performing any related audit work, (2) the auditor’s work could not be reduced beyond the level that would be appropriate if the nonaudit work were performed by another unrelated party, and (3) certain documentation and quality assurance requirements must be met. The new standard includes an express prohibition regarding auditors providing certain bookkeeping or record keeping services and limits payroll processing and certain other services, all of which are presently permitted under current independence rules of the AICPA. However, our new standard allows the auditor to provide routine advice and technical assistance on an ongoing basis and without being subject to the additional safeguards. The focus of these changes to the government auditing standards is to better serve the public interest and to maintain a high degree of integrity, objectivity, and independence for audits of government entities and entities that receive federal funding. However, these standards apply only to audits of federal entities and those organizations receiving federal funds, and not to audits of public companies. In the transmittal letter issuing the new independence standard, we expressed our hope that the AICPA would raise its independence standards to those contained in this new standard in order to eliminate any inconsistency between this standard and their current standards. The AICPA’s recent statement before another congressional committee that the AICPA will not oppose prohibitions on auditors providing certain nonaudit services seems to be a step in the right direction. The independence of public accountants is crucial to the credibility of financial reporting and, in turn, the capital formation process. Auditor independence standards require that the audit organization and the auditor be independent both in fact and in appearance.
These standards place responsibility on the auditor and the audit organization to maintain independence so that opinions, conclusions, judgments, and recommendations will be impartial and will be viewed as being impartial by knowledgeable third parties. Because independence standards are fundamental to the independent audit function, as part of its mission, the new independent and statutorily created government body, which I previously discussed, should be responsible for setting independence standards for audits of public companies and should have the authority to discipline members of the accounting profession that violate such standards. First, I want to underscore that serving on the board of directors of a public company is an important and difficult responsibility. That responsibility is especially challenging in the current environment, in which increased globalization and rapidly evolving technologies must be addressed while, at the same time, quarterly earnings projections must be met in order to maintain or raise the market value of the company’s stock. These pressures and related executive compensation arrangements unfortunately often translate to a focus on short-term business results. This can create perverse incentives, such as attempting to manage earnings to report favorable short-term financial results and/or failing to provide adequate transparency in financial reporting, which disguises risks, uncertainties, and/or commitments of the reporting entity. On balance, though, the difficulty of serving on a public company’s board of directors is not a valid reason for not doing the job right, which means being knowledgeable of the company’s business, asking the right questions, and doing the right thing to protect not only shareholders, but also the public’s interest. At the same time, it is important to strike a reasonable balance between the responsibilities, risks, and rewards of board and key committee members. To do otherwise would serve to discourage highly qualified persons from serving in these key capacities. Board members need to have a clear understanding of who the client being served is. Namely, their client should be the shareholders of the company, and all their actions should be geared accordingly. They should, however, also be aware of the key role that they play in maintaining public confidence in our capital markets system. Audit committees have a particularly important role to play in assuring fair presentation and appropriate accountability of management in connection with financial reporting, internal control, compliance, and related matters. Furthermore, boards and audit committees should have a mutuality of interest with the external auditor to assure that the interests of shareholders are adequately protected. There are a number of steps that can be taken to enhance the independence of audit committees and their working relationship with the independent auditor to further enhance the effectiveness of the audit in protecting the public’s interest. We believe that the SEC, in conjunction with the stock exchanges, should initially explore such actions.
Therefore, any legislative reform could include a requirement for the SEC to work with the stock exchanges to enhance listing requirements for public companies to improve the effectiveness of audit committees and public company auditors, including considering whether and to what extent: audit committee members should be both independent of the company and top management and should be qualified in the areas related to their responsibilities, such as accounting, auditing, finance, and SEC reporting requirements; audit committees should have access to independent legal counsel and other areas of expertise, such as risk management and financial instruments; audit committees should hire the independent auditors and work directly with the independent auditors to ensure the appropriate scope of the audit, resolution of key audit issues, compliance with applicable independence standards, and the reasonableness and appropriateness of audit fees. In this regard, audit committees must realize that any attempts to treat audit fees on a commodity basis can serve to increase the risk and reduce the value of the audit to all parties; audit committees should pre-approve all significant nonaudit services; audit committees should pre-approve the hiring of the public companies’ key financial management officials (such as the chief financial officer or controller) or the providing of financial management services if, within the previous 5 years, they had any responsibility for auditing the public company’s financial statements, reports, or other documents required by the SEC; and audit committees should report to the SEC and the public on their membership, qualifications, and execution of their duties and responsibilities. We also believe that the effectiveness of boards of directors and committees, including their working relationship with management of public companies, can be enhanced by the SEC working with the stock exchanges to enhance certain other listing requirements for public companies. In that respect, the SEC could be directed to work with the stock exchanges to consider whether and to what extent: audit committees, nominating committees, and compensation committees are qualified, independent, and adequately resourced to perform their responsibilities; boards of directors should approve management’s code of conduct and any waivers from the code of conduct, and whether any waivers should be reported to the stock exchanges and the SEC; boards of directors should approve the hiring of key financial management officials who within the last 2 years had any responsibility for auditing the public company’s financial statements, reports, or other documents required by the SEC; and CEOs should serve as the chairman of public company boards. Also, to further protect shareholders and the public interest, the SEC could be directed to report (1) within 180 days from enactment of legislation on other actions it is taking to enhance the overall effectiveness of the current corporate governance structure, and (2) periodically on best practices and recommendations for enhancing the effectiveness of corporate governance to protect both shareholders and the public’s interest.
We believe that the issues raised by Enron’s sudden failure and bankruptcy regarding whether analysts’ independence from issuers of stock is affecting their buy and sell recommendations can be addressed by requiring the SEC to work with the National Association of Securities Dealers (NASD) in connection with certain requirements. Accordingly, the SEC could be directed to work with the NASD to consider whether and to what extent: the firewalls between analysts and the business end of their firms should be widened to enhance analyst independence, and whether to report to the Congress on the effectiveness of the regulations; disclosure of (1) whether the analyst’s firm does investment banking, and (2) whether there is a relationship with the company in question should be improved, and whether to report to the Congress on the effectiveness of the requirements; and implementing regulations to be enforced through an effective examination program should be required. The Congress may wish to have GAO evaluate and report to it one year after enactment of legislation and periodically thereafter on the (1) results of the SEC’s working relationship with the stock exchanges to strengthen corporate governance requirements, and (2) results of the SEC’s working relationship with the NASD in developing independence and conflict of interest requirements for analysts. Accordingly, we suggest that the Congress provide GAO access to the records of the securities self regulatory organizations, such as the New York Stock Exchange and the NASD, that may be needed for GAO to evaluate the SEC’s working relationships with these organizations. Business financial reporting is critical in promoting an effective allocation of capital among companies. Financial statements, which are at the center of present-day business reporting, must be timely, relevant, and reliable to be useful for decision-making. In our 1996 report on the accounting profession, we reported that the current financial reporting model does not fully meet users’ needs. More recently, we have noted that the current reporting model is not well suited to identify and report on key value and risk elements inherent in our 21st Century knowledge-based economy. The SEC is the primary federal agency currently involved in accounting and auditing requirements for publicly traded companies but has traditionally relied on the private sector for setting standards for financial reporting and independent audits, retaining a largely oversight role. Accordingly, the SEC has accepted rules set by the Financial Accounting Standards Board (FASB)—generally accepted accounting principles (GAAP)—as the primary standard for preparation of financial statements in the private sector. We found that despite the continuing efforts of FASB and the SEC to enhance financial reporting, changes in the business environment, such as the growth in information technology, new types of relationships between companies, and the increasing use of complex business transactions and financial instruments, constantly threaten the relevance of financial statements and pose a formidable challenge for standard setters. A basic limitation of the model is that financial statements present the business entity’s financial position and results of its operations largely on the basis of historical costs, which do not fully meet the broad range of user needs for financial information.
Enron’s failure and the inquiries that have followed have raised many of the same issues about the adequacy of the current financial reporting model, such as the need for additional transparency, clarity, more timely information, and risk-oriented financial reporting. Among other actions to address the Enron-specific accounting issues, the SEC has requested that the FASB address the specific accounting rules related to Enron’s special purpose entities and related party disclosures. In addition, the SEC Chief Accountant has also raised concerns that the current standard-setting process is too cumbersome and slow and that much of the FASB’s guidance is rule-based and too complex. He believes that (1) principles-based standards will yield a less complex financial reporting paradigm that is more responsive to emerging issues, (2) the FASB needs to be more responsive to accounting standards problems identified by the SEC, and (3) the SEC needs to give the FASB freedom to address the problems, but the SEC needs to monitor projects on an ongoing basis and, if they are languishing, determine why. We generally agree with the SEC Chief Accountant’s assessment. We also believe that the issues surrounding the financial reporting model can be effectively addressed by the SEC, in conjunction with the FASB, without statutorily changing the standard-setting process. However, we do believe that a more active and ongoing interaction between the SEC and the FASB is needed to facilitate a mutual understanding of priorities for standard-setting, realistic goals for achieving expectations, and timely actions to address issues that arise when expectations are not likely to be met. In that regard, the SEC could be directed to: reach agreement with the FASB on its standard-setting agenda, approach to resolving accounting issues, and timing for completion of projects; monitor the FASB’s progress on projects, including taking appropriate actions to resolve issues when projects are not meeting expectations; and report annually to the Congress on the FASB’s progress in setting standards, along with any recommendations, and the FASB’s response to the SEC’s recommendations. The Congress may wish to have GAO evaluate and report to it one year after enactment of legislation and periodically thereafter on the SEC’s performance in working with the FASB to improve the timeliness and effectiveness of the accounting standard-setting process. Accordingly, we suggest that the Congress provide GAO access to the records of the FASB that may be needed for GAO to evaluate the SEC’s performance in working with the FASB. The FASB receives about two-thirds of its funding from the sale of publications, with the remainder of its funding coming from the accounting profession, industry sources, and others. One of the responsibilities of the FASB’s parent organization, the Financial Accounting Foundation, is to raise funds for the FASB and its standard-setting process to supplement the funding that comes from the FASB’s sale of publications. Some have questioned whether this is the best arrangement to ensure the independence of the standard-setting process. This issue has been raised by questions about the appropriateness of certain accounting standards related to consolidations, which the FASB has been working on for some time and which were applicable to Enron’s restatement of its financial statements, as reported to the SEC by Enron in its November 8, 2001, Form 8-K filing.
However, the issue has previously been raised when the FASB has addressed other controversial accounting issues, such as accounting for stock options. Therefore, the Congress may wish to task the SEC with studying this issue and identifying alternative sources of funding to supplement the FASB’s sale of publications, including the possibility of imposing fees on registrants and/or firms, and to report to the Congress on its findings and actions taken to address the funding issue. Over the last decade, securities markets have experienced unprecedented growth and change. Moreover, technology has fundamentally changed the way markets operate and how investors access markets. These changes have made the markets more complex. In addition, the markets have become more international, and legislative changes have resulted in a regulatory framework that requires increased coordination among financial regulators and requires that the SEC regulate a greater range of products. Moreover, as I have discussed, the collapse of Enron and other corporate failures have stimulated an intense debate on the need for broad-based reform in such areas as oversight of the accounting profession, accounting standards, corporate governance, and analyst conflict-of-interest issues, all of which could have significant repercussions on the SEC’s role and oversight challenges. At the same time, the SEC has been faced with an ever-increasing workload and ongoing human capital challenges, most notably high staff turnover and numerous staff vacancies. Our recent report discusses these issues and the need for the SEC to improve its strategic planning to more effectively manage its operations and limited resources, and also shows that the growth of SEC resources has not kept pace with the growth in the SEC’s workload (such as filings, complaints, inquiries, investigations, examinations, and inspections). We believe that the SEC should be provided with the necessary resources to effectively discharge its current and any increased responsibilities the Congress may give it. And finally, we believe that the SEC should be directed to report annually to the Congress on (1) its strategic plan for carrying out its mission, (2) the adequacy of its resources and how it is effectively managing resources through a risk-oriented approach and prioritization of risks, including effective use of information technology, and (3) any unmet needs including required funding and human resources. The United States has the largest and most respected capital markets in the world. Our capital markets have long enjoyed a reputation of integrity that promotes investor confidence. This is critical to our economy and the economies of other nations given the globalization of commerce. However, this long-standing reputation is now being challenged by some parties. The effectiveness of systems relating to independent audits, financial reporting, and corporate governance, which represent key underpinnings of capital markets and are critical to protecting the public’s interest, has been called into question by the failure of Enron and certain other events and practices. Although the human elements can override any system of controls, it is clear that a range of actions critical to the effective functioning of the system underlying capital markets require attention by a range of key players.
In addition, a strong enforcement function with appropriate civil and criminal sanctions is needed to ensure effective accountability when key players fail to properly perform their duties and responsibilities.
In the wake of the Enron collapse and the proliferation of earnings restatements and pro forma earnings assertions by other companies, questions are being raised about the soundness of private sector financial reporting, auditor independence, and corporate governance. In addressing these issues, the government's role could range from direct intervention to encouraging non-governmental and private-sector entities to adopt practices that would strengthen public confidence. GAO believes that Congress should consider a holistic approach that takes into account the many players and interrelated issues that brought about the Enron situation.
Since December 5, 1989, DOE has not produced War Reserve pits for the nuclear stockpile. On that date, the production of pits at Rocky Flats, which was DOE’s only large-scale pit-manufacturing facility, was suspended because of environmental and regulatory concerns. At that time, it was envisioned that production operations would eventually resume at the plant, but this never occurred. In 1992, DOE closed its pit-manufacturing operations at Rocky Flats without establishing a replacement location. In 1995, DOE began work on its Stockpile Stewardship and Management Programmatic Environmental Impact Statement, which analyzed alternatives for future DOE nuclear weapons work, including the production of pits. In December 1996, Los Alamos was designated as the site for reestablishing the manufacturing of pits. DOE is now reestablishing its capability to produce War Reserve pits there so that pits removed from the existing stockpile for testing or other reasons can be replaced with new ones. Reestablishing the manufacturing of pits will be very challenging because DOE’s current efforts face new constraints that did not exist previously. For example, engineering and physics tests were used in the past for pits produced at Rocky Flats to ensure that those pits met the required specifications. Nuclear tests were used to ensure that those pits and other components would perform as required. While engineering and physics tests will still be utilized for Los Alamos’s pits, the safety and reliability of today’s nuclear stockpile, including newly manufactured pits, must be maintained without the benefit of underground nuclear testing. The United States declared a moratorium on such testing in 1992. President Clinton extended this moratorium in 1996 by signing the Comprehensive Test Ban Treaty, through which the United States forwent underground testing indefinitely. In addition, to meet regulatory and environmental standards that did not exist when pits were produced at Rocky Flats, new pit-production processes are being developed at Los Alamos. DOD is responsible for implementing the U.S. nuclear deterrent strategy, which includes establishing the military requirements associated with planning for the stockpile. The Nuclear Weapons Council is responsible for preparing the annual Nuclear Weapons Stockpile Memorandum, which specifies how many warheads of each type will be in the stockpile. Those weapons types expected to be retained in the stockpile for the foreseeable future are referred to as the enduring stockpile. DOE is responsible for managing the nation’s stockpile of nuclear weapons. Accordingly, DOE certifies the safety and reliability of the stockpile and determines the requirements for the number of weapons components, including pits, needed to support the stockpile. DOE has made important changes in the plans for its pit-manufacturing mission. Additionally, some specific goals associated with these plans are still evolving. In December 1996, DOE’s goals for the mission were to (1) reestablish the Department’s capability to produce War Reserve pits for one weapons system by fiscal year 2001 and to demonstrate the capability to produce all pit types for the enduring stockpile, (2) establish a manufacturing capacity of 10 pits per year by fiscal year 2001 and expand to a capacity of up to 50 pits per year by fiscal 2005, and (3) develop a contingency plan for the large-scale manufacturing of pits at some other DOE site or sites. 
In regard to the first goal, DOE and Los Alamos produced a pit prototype in early 1998 and believe they are on target to produce a War Reserve pit for one weapons system by fiscal year 2001. In regard to the second goal, DOE has made important changes. Most notably, DOE’s capacity plans have changed from a goal of 50 pits per year in fiscal year 2005 to 20 pits per year in fiscal 2007. What the final production capacity at Los Alamos will be is uncertain. Finally, DOE’s efforts to develop a contingency plan for large-scale production have been limited and when such a plan will be in place is not clear. To meet the first goal of reestablishing its capability to produce a War Reserve pit for a particular weapons system by fiscal year 2001, DOE has an ambitious schedule. This schedule is ambitious because several technical, human resource, and regulatory challenges must be overcome. Approximately 100 distinct steps or processes are utilized in fabricating a pit suitable for use in the stockpile. Some of the steps in manufacturing pits at Los Alamos will be new and were not used at Rocky Flats. Each of these manufacturing processes must be tested and approved to ensure that War Reserve quality requirements are achieved. The end result of achieving this first goal is the ability to produce pits that meet precise War Reserve specifications necessary for certification as acceptable for use in the stockpile. Skilled technicians must also be trained in the techniques associated with the pit-manufacturing processes. Currently, according to DOE and Los Alamos officials, several key areas remain understaffed. According to a Los Alamos official, the laboratory is actively seeking individuals to fill these positions; however, the number of qualified personnel who can perform this type of work and have the appropriate security clearances is limited. Finally, according to DOE and Los Alamos officials, the production of pits at Los Alamos will be taking place in a regulatory environment that is more stringent than that which existed previously at Rocky Flats. As a result, new processes are being developed, and different materials are being utilized so that the amount and types of waste can be reduced. Los Alamos achieved a major milestone related to its first goal when it produced a pit prototype on schedule in early 1998. DOE and Los Alamos officials believe they are on schedule to produce a War Reserve pit for one weapons system by fiscal year 2001. DOE plans to demonstrate the capability to produce pits for other weapons systems but does not plan to produce War Reserve pits for these systems until sometime after fiscal year 2007. Furthermore, DOE’s Record of Decision stated that Los Alamos would reestablish the capability to manufacture pits for all of the weapons found in the enduring stockpile. Currently, however, according to DOE officials, DOE does not plan to reestablish the capability to produce pits for one of the weapons in the enduring stockpile until such time as the need for this type of pit becomes apparent. Once Los Alamos demonstrates the capability to produce War Reserve pits, it plans on establishing a limited manufacturing capacity. Originally, in late 1996, DOE wanted to have a manufacturing capacity of 10 pits per year by fiscal year 2001 and planned to expand this capacity to 50 pits per year by fiscal 2005. In order to achieve a 10-pits-per-year manufacturing capacity by fiscal year 2001, DOE was going to supplement existing equipment and staff in the PF-4 building at Los Alamos. 
To achieve a capacity of 50 pits per year by fiscal year 2005, DOE planned a 3-year suspension of production in PF-4 starting in fiscal year 2002. During this time, PF-4 would be reconfigured to accommodate the larger capacity. Also, some activities would be permanently moved to other buildings at Los Alamos to make room for the 50-pits-per-year production capacity. For example, a number of activities from the PF-4 facility would be transferred to the Chemistry and Metallurgy Research building. Once PF-4 was upgraded, it would be brought back on-line with a production capacity of 50 pits per year. In December 1997, DOE’s new plan changed the Department’s goal for implementing the limited manufacturing capacity. DOE still plans to have a 10-pits-per-year capacity by fiscal year 2001. However, DOE now plans to increase the capacity to 20 pits per year by fiscal year 2007. If DOE decides to increase production to 50 pits per year, it would be achieved sometime after fiscal year 2007. As with the original plan, in order to achieve a 50-pits-per-year capacity, space for manufacturing pits in PF-4, which is now shared with other activities, would have to be completely dedicated to the manufacturing of pits. DOE officials gave us a number of reasons for these changes. First, because the original plan required a 3-year shutdown of production in PF-4, DOE was concerned that there would not be enough pits during the shutdown to support the stockpile requirement, considering that pits would have been destructively examined under the stockpile surveillance program. Under the new plan, annual production will continue except for 3- or 4-month work stoppages during some years to allow for facility improvements and maintenance. Second, DOE was concerned that pits produced after the originally planned 3-year shutdown might need to be recertified. Third, DOE wanted to decouple the construction activities at the Chemistry and Metallurgy Research building from planned construction at PF-4 because linking construction projects at these two facilities might adversely affect the pit-manufacturing mission’s schedule. DOE’s 1996 plan called for developing a contingency plan to establish a large-scale (150-500 pits per year) pit-manufacturing capacity within 5 years, if a major problem were found in the stockpile. DOE has done little to pursue this goal. It has performed only a preliminary evaluation of possible sites. DOE has not developed a detailed contingency plan, selected a site, or established a time frame by which a plan should be completed. According to DOE officials, they will not pursue contingency planning for large-scale manufacturing until fiscal year 2000 or later. The purpose of the contingency plan was to lay out a framework by which DOE could establish a production capacity of 150 to 500 pits per year within a 5-year time frame. Such a capacity would be necessary if a systemwide problem were identified with pits in the stockpile. This issue may become more important in the future, as existing nuclear weapons and their pits are retained in the stockpile beyond their originally planned lifetime. Research is being conducted on the specific effects of aging on plutonium in pits. A DOE study found that Los Alamos is not an option for large-scale pit manufacturing because of space limitations that exist at PF-4. As a result, large-scale operations would most likely be established at some other DOE nuclear site(s) where space is adequate and where some of the necessary nuclear infrastructure exists.
DOE has not specified a date by which the plan will be completed, and, according to DOE officials, the contingency plan has not been a high priority within DOE for fiscal years 1998-99. According to DOE officials, they may fund approximately $100,000 for a study of manufacturing and assembly processes for large-scale manufacturing in fiscal year 1999. In addition, according to DOE officials, DOE has not pursued contingency planning for large-scale manufacturing more aggressively because the Department would like more work to be done at PF-4 prior to initiating this effort. In this regard, the officials stated that the development of a contingency plan requires more complete knowledge of the processes, tooling, and technical skills still being put in place at Los Alamos. This knowledge will serve as a template for large-scale manufacturing. DOE believes that this knowledge should be well defined by fiscal year 2000. According to information from DOE, the total cost for establishing and operating the pit-manufacturing mission under its new plan will be over $1.1 billion from fiscal year 1996 through fiscal 2007. This estimate includes funds for numerous mission elements needed to achieve DOE’s goals. This estimate does not include over $490 million in costs for other activities that are not directly attributable to pit production but are needed to support a wide variety of activities, including the pit-manufacturing mission. Some key controls related to the mission are either in the formative stages of development or do not cover the mission in its entirety. DOE provided us with data reflecting the total estimated costs of its new plans and schedules. These data were developed for the first time during our audit. DOE emphasized that these costs should be treated as draft estimates instead of approved numbers. On the basis of this information, the costs for establishing and operating the pit-manufacturing mission were estimated to total over $1.1 billion from fiscal year 1996 through fiscal 2007. Table 1 shows the total estimated costs related to the various elements of the mission. At the time of our review, DOE estimated that by the end of fiscal year 1998, it would have spent $69 million on the mission. Other activities are needed to support a wide variety of efforts, including the pit-manufacturing mission but are not directly attributable to pit production. These include construction-related activities at various Los Alamos nuclear facilities. For example, one activity is the construction upgrades at the Chemistry and Metallurgy Research building. DOE and Los Alamos officials stated that the costs of these activities would have been incurred whether or not Los Alamos was selected for the pit-manufacturing mission. However, unless these activities are carried out, DOE and Los Alamos officials believe that it will be difficult for them to achieve the mission’s goals. Table 2 shows the total estimated costs of these other supporting activities. The success of DOE’s pit-manufacturing mission at Los Alamos requires the use of effective cost and managerial controls for ensuring that the mission’s goals are achieved within cost and on time. An effective cost and managerial control system should have (1) an integrated cost and schedule control system, (2) independent cost estimates, and (3) periodic technical/management reviews. DOE and Los Alamos have taken actions to institute these cost and managerial controls related to the pit mission. 
However, some of these controls are either in the formative stages of development or are limited to addressing only certain elements of the mission instead of the entire mission. An integrated cost and schedule control system would allow managers to measure costs against stages of completion for the pit-manufacturing mission’s overall plan. For example, at any given time, the plan might identify a certain percentage of the mission’s resources that were to be spent within established limits. If variances from the plan were to exceed those limits, corrective actions could be taken. DOE and Los Alamos have in place, or are in the process of developing, (1) an integrated planning and scheduling system for the pit-manufacturing mission and (2) a separate financial management information system for monitoring costs. Los Alamos’s planning and scheduling system for the pit-manufacturing mission will eventually track, in an integrated fashion, all key planning and scheduling milestones. This system will enable managers to have timely and integrated information regarding the mission’s progress. Currently, individual managers are tracking their own progress toward important milestones but do not have integrated mission information. If their individual milestones slip, managers can take corrective actions. The integrated planning and scheduling system will enable managers to have information regarding the mission’s progress as a whole. According to a Los Alamos official, the planning and scheduling system will be completed in December 1998. Los Alamos’s financial management information system, through which mission-related costs can be monitored, provides managers with information that enables them to track expenditures and available funds. Eventually, this system will be interfaced with the pit-manufacturing mission’s integrated planning and scheduling system. However, according to a Los Alamos official, this may take several years. Independent cost estimates are important, according to DOE, because they serve as analytical tools to validate, cross-check, or analyze estimates developed by proponents of a project. DOE’s guidance states that accurate and timely cost estimates are integral to the effective and efficient management of DOE’s projects and programs. According to DOE and Los Alamos officials, independent cost estimates are required by DOE’s guidance for individual construction projects but are not required for other elements of the pit-manufacturing mission. DOE has two construction projects directly related to the pit mission and five others that indirectly support it. The Capability Maintenance and Improvements Project and the Transition Manufacturing and Safety Equipment project are directly related to the pit-manufacturing mission. The Nuclear Materials Storage Facility Renovation, the Chemistry and Metallurgy Research Building Upgrades Project, the Nuclear Materials Safeguards and Security Upgrades Project, the Nonnuclear Reconfiguration Project, and the Fire Water Loop Replacement Project indirectly support the mission as well as other activities at Los Alamos. DOE plans to eventually make an independent cost estimate for most of these construction projects. According to a DOE official, independent cost estimates have been completed for the Nuclear Materials Storage Facility Renovation, the Nonnuclear Reconfiguration Project, and the Fire Water Loop Project. Independent cost estimates have been performed for portions of the Chemistry and Metallurgy Research Building Upgrades Project. 
Additionally, a preliminary independent cost estimate was performed for the Capability Maintenance and Improvements Project prior to major changes in the project. DOE officials plan to complete independent cost estimates for the Nuclear Materials Safeguards and Security Upgrades Project, the revised Capability Maintenance and Improvements Project, and portions of the Transition Manufacturing and Safety Equipment project, depending upon their complexity. Because the bulk of mission-related costs are not construction costs, these other funds will not have the benefit of independent cost estimates. The mission’s elements associated with these funds include activities concerning War Reserve pit-manufacturing capability, pit-manufacturing operations, and certification. Moreover, according to DOE and Los Alamos officials, no independent cost estimate has been prepared for the mission as a whole, and none is planned. According to these officials, this effort is not planned because of the complexity of the mission and because it is difficult to identify an external party with the requisite knowledge to accomplish this task. It is important to note, however, that these types of studies have been done by DOE. In fact, DOE has developed its own independent cost-estimating capability, which is separate and distinct from DOE’s program offices, to perform such estimates. Technical/management reviews can be useful in identifying early problems that could result in cost overruns or delay the pit-manufacturing mission. DOE and Los Alamos have taken a number of actions to review particular cost and management issues. These include (1) a “Change Control Board” for the entire mission, (2) a technical advisory group on the management and technical issues related to the production of pits, (3) peer reviews by Lawrence Livermore National Laboratory on pit-certification issues, and (4) annual mission reviews. The Change Control Board consists of 14 DOE, Los Alamos, and Lawrence Livermore staff who worked on the development of the mission’s integrated plan. The Board was formed in March 1998 to act as a reviewing body for costs and management issues related to the mission. This group will meet quarterly or more regularly, as needed, to resolve cost or schedule problems. The group’s initial efforts have focused on addressing unresolved issues in the integrated plan. For example, the group has merged data from Lawrence Livermore National Laboratory and Los Alamos into the integrated plan and is updating a key document associated with the mission’s master schedule. Since July 1997, Los Alamos has been using a technical advisory group composed of nuclear experts external to Los Alamos and DOE. This group, paid by Los Alamos, provides independent advice and consultation on management and technical issues related to pit manufacturing and other related construction projects. The specific issues for assessment are selected either by the group or upon the request of Los Alamos’s management. According to the group’s chairman, Los Alamos has historically had problems with project management, and the group’s work has focused on efforts to strengthen this aspect of the pit-manufacturing mission. For example, the group has identified the need for and provided advice on the development of key planning documents. This group meets at Los Alamos on a monthly basis. Los Alamos plans specific peer reviews by Lawrence Livermore to independently assess the processes and tests related to the certification of pits. 
Los Alamos’s use of these peer reviews is an effort to provide an independent reviewing authority because Los Alamos is responsible for both manufacturing the pits and approving their certification. An initial planning session for this effort is scheduled for the fall of 1998. DOE and Los Alamos officials conducted a review of the pit-manufacturing mission in September 1997. The purpose of this review was to brief DOE management on the progress and status of various elements associated with the mission. As a result of the 1997 review, DOE and Los Alamos began developing an integrated plan that brings together the various elements of the mission. According to Los Alamos officials, such reviews will be held annually. DOD is responsible for implementing the U.S. nuclear deterrent strategy. According to officials from various DOD organizations, DOE’s pit-manufacturing mission is critical in supporting DOD’s needs. As a result, representatives from both Departments have conferred on and continue to discuss plans for the mission. Two important issues remain unresolved. First, officials from various DOD organizations have concerns about changes in the manufacturing processes that will be used to produce pits at Los Alamos. Second, on the basis of preliminary analyses by various DOD organizations, some representatives of these organizations are not satisfied that DOE’s planned capacity will meet the anticipated stockpile needs. DOE is responsible for ensuring that the stockpile is safe and reliable. The safety and reliability of the pits produced at Rocky Flats were proven through nuclear test detonations. Officials from various DOD organizations are concerned that Los Alamos’s pits will be fabricated by some processes that are different from those employed previously at Rocky Flats. Furthermore, pits made with these new processes will not have the benefit of being tested in a nuclear detonation to ensure that they perform as desired. As a result, officials from various DOD organizations want assurance that Los Alamos’s pits are equivalent to those produced at Rocky Flats in all engineering and physics specifications. To accomplish this, DOE and Los Alamos plan to have Lawrence Livermore conduct peer reviews. These peer reviews will focus on the certification activities related to the first type of pit to be produced. This will help verify that the necessary standards have been met. According to representatives from both Departments, they will continue to actively consult on these issues. The other unresolved issue between DOD and DOE is DOE’s planned pit-manufacturing capacity. Several efforts are currently under way within various DOD organizations to determine the stockpile’s needs and the associated requirements for pits. DOD has not established a date for providing DOE with this information. Nevertheless, on the basis of the preliminary analyses performed by various DOD organizations, many DOD officials believe that DOE’s capacity plans will not meet their stockpile needs. According to these officials, their requirements will be higher than the production capacity planned at Los Alamos. As a result, these officials do not support DOE’s stated goal of developing a contingency plan for a large-scale manufacturing capacity sometime in the future. Rather, these officials told us that they want DOE to establish a large-scale manufacturing capacity as part of its current efforts. 
However, DOD officials said that they will be unable to give detailed pit-manufacturing requirements until the lifetime of pits is specified more clearly through DOE’s ongoing research on how long a pit can be expected to function after its initial manufacture. According to DOE officials, they believe that the planned capacity is sufficient to support the current needs of the nuclear weapons stockpile. Furthermore, no requirement has been established for a larger manufacturing capacity beyond that which is planned for Los Alamos. DOE officials told us that they are discussing capacity issues with DOD and are seeking to have joint agreement on the required capacity. However, no date has been established for reaching an agreement on this issue. DOE plans to spend over $1.1 billion through fiscal year 2007 to establish a 20-pits-per-year capacity. This capacity may be expanded to 50 pits per year sometime after fiscal year 2007. Various DOD organizations have performed preliminary analyses of the capacity needed to support the stockpile. These analyses indicate that neither the 20-pits-per-year capacity nor the 50-pits-per-year capacity will be sufficient to meet the needs of the stockpile. As a result, officials from organizations within DOD oppose DOE’s plan for not developing a large-scale manufacturing capacity now but rather planning for it as a future contingency. Once the various DOD organizations have completed their stockpile capacity analyses, DOD can then let DOE know its position on the needs of the nuclear stockpile. DOE will then be faced with the challenge of deciding how it should respond. A decision to pursue a production capacity larger than that planned by DOE at Los Alamos will be a major undertaking. Because of the cost and critical nature of the pit-manufacturing mission, DOE needs to ensure that effective cost and managerial controls are in place and operating. DOE and Los Alamos have not fully developed some of the cost and managerial control measures that could help keep them within budget and on schedule. An integrated cost and schedule control system is not in place even though millions of dollars have been spent on the mission. Furthermore, only a small portion of the costs associated with the mission has had the benefit of independent cost estimates. Without fully developed effective cost and managerial controls, the mission could be prone to cost overruns and delays. In order for DOE to have the necessary information for making pit-production capacity decisions, we recommend that the Secretary of Defense do the following: Provide DOE with DOD’s views on the pit-manufacturing capacity needed to maintain the stockpile. This should be done so that DOE can use this information as part of its reevaluation of the stockpile’s long-term capacity needs. While we understand that DOD cannot yet provide detailed requirements, DOE can be provided with the findings of the preliminary analyses of various DOD organizations. In order to ensure that the pit-manufacturing mission at Los Alamos supports the nuclear stockpile in a cost-effective and timely manner, we recommend that the Secretary of Energy take the following measures: Reevaluate existing plans for the pit-manufacturing mission in light of the issues raised by DOD officials regarding the capacity planned by DOE. Expedite the development of the integrated cost and schedule control system at Los Alamos. This needs to be done as soon as possible to help ensure that the mission is achieved within cost and on time. 
Conduct independent cost estimates for the entire pit-manufacturing mission. This can be done either for the mission as a whole or for those individual mission elements that have not had independent estimates. We provided DOE and DOD with a draft of this report for review and comment. DOE concurred with all but one recommendation in the report. That recommendation was that the Secretary of Energy “establish a separate line item budget category for the pit-manufacturing mission at Los Alamos.” In its comments, DOE emphasized that its current budgeting and accounting practices related to pit production are consistent with appropriation guidelines, are consistent with budgeting and accounting standards, and are responsive to the Government Performance and Results Act. DOE also stated that it plans to keep congressional staff informed of the mission’s progress through quarterly updates. These updates will be initiated following the approval of the budget for fiscal year 1999. In a subsequent discussion, DOE’s Laboratory Team Leader in the Office of Site Operation said that these updates will include information on the mission’s cost and milestones. He noted that the cost information provided could be as detailed as congressional staff require. Our recommendation was intended to have DOE identify to the Congress, in a clear and comprehensive manner, the total estimated costs associated with the pit-manufacturing mission. The clear identification of total estimated costs is important because the pit-manufacturing mission is critical to national security interests and represents a significant financial investment for the future. Since DOE prepared a cost estimate covering the total pit mission during our audit, a baseline has been established. We believe that DOE’s planned quarterly updates will be an appropriate means of updating this cost information for the Congress. As a result, we have deleted this recommendation from our final report. DOE also provided several clarifications to the report, and the report has been revised where appropriate. DOE’s comments are provided in appendix II. DOD agreed with the information presented in our draft report and provided us with technical clarifications, which we incorporated as appropriate. DOD did not agree with our recommendation that the Secretary of Defense clearly articulate DOD’s views on the pit-manufacturing capacity needed to maintain the stockpile. DOD was concerned that the aging of pits was not clearly identified in our report as a driving force of pit-production requirements. DOD said that it could not give detailed pit-manufacturing requirements until the lifetime of pits is specified more clearly by DOE. We have modified our report and the recommendation to recognize that DOD believes that it cannot provide DOE with detailed pit-manufacturing capacity requirements until more is known about the aging of pits. However, we believe that there is merit in DOD sharing with DOE the information from the preliminary analyses of various DOD organizations. This information would be useful for DOE in its long-term planning efforts, especially those related to contingency planning. DOD’s comments are included in appendix III. To address our objectives, we interviewed officials and obtained documents from DOD, DOE, Los Alamos, and the Nuclear Weapons Council. We did not independently verify the reliability of the estimated cost data that DOE provided us.
According to DOE, these data represent its best estimates of future mission costs but are likely to change as the mission progresses and should not be viewed as final. Our scope and methodology are discussed in detail in appendix I. We performed our review from October 1997 through August 1998 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the Secretary of Energy; the Secretary of Defense; the Director, Office of Management and Budget; and appropriate congressional committees. We will also make copies available to others on request. To obtain information about the Department of Energy’s (DOE) plans and schedules for reestablishing the manufacturing of pits, we gathered and analyzed various documents, including DOE’s (1) Record of Decision for the Stockpile Stewardship and Management Programmatic Environmental Impact Statement, (2) guidance for stockpile management and the pit-manufacturing mission, and (3) draft Integrated Plan for pit manufacturing and certification. We discussed with DOE and Los Alamos National Laboratory officials the basis for the mission’s plans and schedules. These officials also discussed why changes were made to these plans and schedules in December 1997. DOE and Los Alamos officials discussed with us their progress in meeting milestones, which we compared with the established major milestones for the mission. In order to have a better understanding of the efforts taking place at Los Alamos, we also met with DOE and contractor employees at Rocky Flats who were formerly involved with the production of pits at that site. These individuals discussed the pit production issues and challenges that they faced at Rocky Flats. Cost information associated with the pit-manufacturing mission was obtained primarily from DOE’s Albuquerque Operations Office. This information was compiled by DOE with the assistance of Los Alamos officials. These costs were only recently prepared by DOE and Los Alamos. According to a DOE official, this effort took several months partly because of changes in DOE’s mission plans. These costs were provided to us in current-year dollars. As such, we did not adjust them to constant-year dollars. Additionally, we did not independently verify the accuracy of the cost data. These data were in draft form during our review and not considered approved by DOE. We interviewed both DOE and Los Alamos officials regarding the methodology that was used to develop the cost data. We also discussed with DOE and Los Alamos officials cost and managerial controls related to the mission and reviewed pertinent documents on this subject. To understand unresolved issues between the Department of Defense (DOD) and DOE regarding the manufacturing of pits, we spoke with representatives from DOD, DOE, and Los Alamos. DOD officials with whom we spoke included representatives from the Joint Chiefs of Staff, Nuclear and Chemical and Biological Defense Programs, Army, Air Force, Navy, and Strategic Command. We also met with a representative of the Nuclear Weapons Council.
Our work was conducted in Golden, Colorado; Germantown, Maryland; Albuquerque, New Mexico; Los Alamos, New Mexico; Alexandria, Virginia; and Washington, D.C., from October 1997 through August 1998 in accordance with generally accepted government auditing standards.
Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) efforts to manufacture war reserve nuclear weapon triggers, or pits, at its Los Alamos National Laboratory, focusing on: (1) DOE's plans and schedules for reestablishing the manufacturing of pits at Los Alamos; (2) the costs associated with these efforts; and (3) unresolved issues regarding the manufacturing of pits between the Department of Defense (DOD) and DOE. GAO noted that: (1) DOE's plans for reestablishing the production of pits at Los Alamos National Laboratory have changed and are still evolving; (2) DOE expects to have only a limited capacity online by fiscal year (FY) 2007; (3) specifically, DOE plans to reestablish its capability to produce war reserve pits for one weapons system by FY 2001 and plans to have an interim capacity of 20 pits per year online by FY 2007; (4) this planned capacity differs from the goal that DOE established in FY 1996 to produce up to 50 pits per year by FY 2005; (5) DOE has not decided what the final production capacity at Los Alamos will be; (6) DOE has done little to develop a contingency plan for the large-scale manufacturing of pits (150-500 pits per year); (7) large-scale manufacturing would be necessary if a systemwide problem were identified with pits in the stockpile; (8) the current estimated costs for establishing and operating DOE's pit-manufacturing mission total over $1.1 billion from FY 1996 through FY 2007; (9) this estimate does not include over $490 million in costs for other activities that are not directly attributable to the mission but are needed to support a wide variety of defense-related activities; (10) GAO also noted that some key cost and managerial controls related to DOE's pit-manufacturing mission are either in the formative stages of development or do not cover the mission in its entirety; (11) DOD and DOE have discussed, but not resolved, important issues regarding: (a) changes in the manufacturing processes that will be used to produce pits at Los Alamos; and (b) the pit-manufacturing capacity planned by DOE; (12) officials from various DOD organizations have expressed concerns about the equivalence of Los Alamos's pits to the pits previously manufactured at Rocky Flats because some manufacturing processes will be new at Los Alamos and are different from those previously used by Rocky Flats; (13) also, officials from various DOD organizations are not satisfied that DOE's current or future capacity plans will be sufficient to meet the stockpile's needs; (14) various DOD organizations have performed preliminary analyses of the capacity needed to support the stockpile; (15) on the basis of these analyses, some of these officials believe that the stockpile's needs exceed the 20-pits-per-year capacity that DOE may establish in the future; (16) however, DOD officials said that they will be unable to give detailed pit-manufacturing requirements until the lifetime of pits is more clearly specified by DOE; and (17) DOE is currently studying this issue.
With over 54,000 engines to support its 17,400 aircraft, the Department of Defense (DOD) is the world’s largest owner of aircraft and aircraft engines. During fiscal years 1992 and 1993, the total cost for maintaining these engines was about $1.1 billion of the $13 billion depot maintenance program. Depot repair of engines and engine components requires more funding than any other commodity that is not an end-item weapon system, such as an aircraft or ship. Engine overhaul costs represent about 8.5 percent of the total depot maintenance budget. Military engines are maintained and overhauled in an extensive network of military service depots, private sector engine manufacturers, and private sector repair activities, such as airlines and independent repair service companies. Of the 51 types of military engines used today, 28 are generally repaired in military depots and 23 almost exclusively by contractors. The engines maintained by the private sector generally have commercial as well as military applications. Generally, commercial counterpart engines repaired by the private sector support fewer aircraft and require less inventory than engines that are maintained in military depots. In recent years, private sector firms have sought more of the military engine workload. At the same time, excess capacity has also been increasing in military depots, as both numbers of military aircraft and engines as well as engine overhaul requirements have declined. As a part of this review, we analyzed DOD’s approach to allocating engine depot repair between the public and private sectors. Engine maintenance has been the subject of recent congressional interest. Additionally, engines are DOD’s largest and most costly commodity group. Further, the engines in one category—those with commercial counterparts—are either identical or very similar to engines used in the private sector. These characteristics enhance their potential cost-effectiveness as candidates for privatization. As a part of our analysis, we reviewed a March 1995 DOD report to Congress on the maintenance of military turbine engines with civilian engine counterparts. Depot maintenance involves repairing, overhauling, modifying, and upgrading defense systems and equipment. Depot maintenance also includes limited manufacture of parts, technical support, modifications, testing, and reclamation as well as software maintenance. DOD estimates that its depot repair facilities and equipment are valued at over $50 billion. Thousands of private sector firms also do depot-level repair. Appendix I provides a brief overview of the engine depot repair process, using a flow diagram and pictures. Depot-level maintenance is the third of the three maintenance levels used by the military services. Depot maintenance activities have historically had more extensive technical capability than the lower levels—in terms of the facilities, equipment, and trained personnel. However, various programs initiated in recent years by the military have resulted in blending some maintenance activities among the various levels. For example, the Air Force implemented a two-level maintenance concept that significantly reduced the second level of maintenance at the operational unit for some systems, including engines. Under this concept, faulty engine components are shipped from the unit to Air Force depots, including the two engine repair depots.
The work done in the two-level shops is considered depot-level repair and is performed by a combination of military, civilian, and contractor personnel. DOD has depot-level capability to repair 28 different types of large turbine engines. Most of these engines are used to power DOD’s fleet of fixed- and rotary-wing aircraft. Three exceptions are the General Electric LM2500 ship engine, the Lycoming AGT1500 M-1 tank engine, and the Allison 501K, which is used for electrical power generators on ships. DOD also organically repairs many smaller gas turbine engines that provide auxiliary power to aircraft and ground support equipment. DOD contracts for most of the repair of 23 other engines, which power such aircraft as the KC-10, T-38, and C-9. Most of the 28 engines maintained in DOD’s public depots are military-unique and not used in the commercial marketplace. Military-unique engines include the F100 engine, which powers the F-15 and F-16 aircraft, and the F404 engine, which powers the F/A-18 and F-117A aircraft. However, 10 of the 28 engines maintained in DOD depots are comparable to engines used in the private sector. In addition, the Air Force is considering developing repair capability for the F117 engine, which powers the C-17 aircraft and is currently supported by the manufacturer. It is similar to the commercial engine that powers the Boeing 757 aircraft. Table 1.1 shows the 11 military engines with commercial counterparts for which DOD has or is considering developing depot maintenance capability. In most cases where it repairs a military engine with a commercial counterpart, DOD owns a significant portion of the engines in existence. For example, DOD has 25 percent of the F108/CFM56 engines, 54 percent of the T56, 62 percent of the TF33/JT3D, 78 percent of the TF34/CF34, and 95 percent of the T53. DOD depot maintenance workload requirements, including engines, have decreased from about 202 million direct labor hours in fiscal year 1987 to about 100 million direct labor hours projected for fiscal year 1996. Since geopolitical tensions eased in the late 1980s, changes in military strategy, reductions in force structure, and improved engine reliability have all reduced engine repair requirements. The change in war-planning scenarios from a massive, protracted war in response to a Soviet invasion to shorter duration contingency scenarios also reduced the anticipated surge requirement for depot maintenance. Similarly, reductions in aircraft inventory have also reduced maintenance requirements. Between fiscal years 1985 and 1994, the services reduced their aircraft inventories from about 24,500 to 17,400. For example, the Air Force reduced its F-4 aircraft inventory from 1,597 to 61. Depot overhauls of the J79 engine, which supports the F-4 aircraft, also declined from over 500,000 direct labor hours in fiscal year 1986 to an estimated 0 for fiscal year 1997. Further reductions in aircraft inventories and associated engine repair requirements are expected as the services continue to phase out older weapon systems. In addition, improvements in technology have increased the reliability of turbine engines, reduced the number of depot-level overhauls, and reduced depot-level maintenance requirements. For example, three different engines have powered the KC-135 tanker aircraft. The first KC-135s were fitted with the J57 engine, which was later replaced with the TF33 engine. The Air Force is now replacing most of these engines with the F108.
The F108 engine, with an unscheduled removal rate per 1,000 flying hours of 0.10, has 91 percent fewer unscheduled engine removals than the J57, which has an unscheduled engine removal rate of 1.16, and 79 percent fewer than the TF33, which has an unscheduled removal rate of 0.48. Similar engine reliability improvements have been achieved through modifications of other engines. For example, various upgrades over a 20-year period have increased the periods of time between scheduled overhauls for the F100 from 2 to 8 years. In response to declining requirements and criticism of maintaining duplicate sources of repair, the military services have decreased the number of depots with depot engine repair capability. For example, the number of depots repairing turbine engines decreased from eight to six between 1990 and 1994. Additionally, DOD consolidated repair activities for most engine types at only one depot. As shown in table 1.2, 11 engine types were maintained at two or more depots in 1990. With only one exception, DOD now has only one organic depot-level repair site for each military engine. However, some engines are repaired both by a military depot and one or more private sector contractors. These workload consolidations began in 1990 as part of the DOD management review process and subsequent Base Closure and Realignment Commission (BRAC) decisions to close aviation depots. Specifically, Defense Management Report Decision 908 initially called for $3.9 billion in depot cost reductions over a 5-year period, but the target savings were later increased to $6.4 billion over a 7-year period. Efforts to achieve savings included consolidation, interservicing, and competitions between government depots and the private sector. Some of these efforts were superseded by the 1993 BRAC decision to close Alameda Naval Aviation Depot. For example, a single site for handling the T56 engine core workload was to be decided by a public-public competition between Alameda Naval Aviation Depot and San Antonio Air Logistics Center. Following the BRAC decision to close Alameda, the Navy transferred its T56/501K workload to the San Antonio Air Logistics Center. Despite these initiatives, DOD’s engine depot repair facilities continue to have significant excess capacity. During the 1995 BRAC process, DOD’s Joint Cross Service Group for Depot Maintenance noted that engines were among the five commodities with the greatest amount of excess capacity. We found this excess capacity to be about 5 million direct labor hours—about 45 percent of the total engine capacity. The Fiscal Year 1995 Department of Defense Appropriations Conference Report 103-747 required DOD to submit to the House and Senate Committees on Appropriations a detailed proposal for expanding competition for depot maintenance of jet engines with civilian counterparts. The report noted that DOD could achieve significant savings by expanding competition for depot maintenance of equipment common to the military and industry, specifically, commercially developed aircraft turbine (jet) engines. On March 14, 1995, DOD provided the House and Senate Committees on Appropriations with its report. In its report, DOD concluded that the principal reason for maintaining depot maintenance capability is to support readiness and sustainability in the Joint Chiefs of Staff major regional conflict scenarios.
The report also stated that DOD’s approach for achieving this objective is to retain a certain level of capability in military depots—capability that DOD refers to as “core.” DOD also concluded that once core capabilities are established, it is essential, from an economic perspective, to use them during peacetime. In its engine report, DOD reviewed 17 military engines with commercial counterparts—10 maintained in the private sector and 7 in military depots. The report concluded that, for two reasons, no changes in workload allocation between the public and private sector were warranted. First, the repair assignments were consistent with DOD’s core requirements and sound business practices. Second, they supported the title 10 U.S.C. requirement that not more than 40 percent of depot maintenance funds be used for work performed by other than federal government employees. Because of significant congressional interest in privatization of depot maintenance workloads, and engine workloads in particular, we addressed the following: (1) the rationale supporting the continued need for DOD to maintain capability to repair engines at its own depots, (2) whether there are opportunities to privatize additional engine workloads, and (3) the impact excess capacity within DOD’s depot system has on the cost-effectiveness of decisions to privatize additional workloads. We drew from information gathered as a part of our overall review of DOD’s depot-level maintenance program, including our commodity study of depot maintenance aircraft engine workload and capacity. As a part of this effort, we reviewed (1) historical workload data for each depot that performs engine overhauls and repairs engine components; (2) the services’ fiscal year 1997 engine workload projections for each depot in our study; and (3) capacity, core workload, and workload projections for fiscal years 1996 through 1999 used by the services to develop recommendations for the BRAC Commission. We interviewed officials and examined documents at the Office of the Secretary of Defense and Army, Air Force, and Navy headquarters, Washington, D.C.; Naval Aviation Depot Operations Center, Naval Air Station, Patuxent River, Maryland; Air Force Materiel Command, Dayton, Ohio; and Joint Depot Maintenance Analysis Group, Gentile Station, Dayton, Ohio. We interviewed service officials, examined documents, and visited the facilities at the San Antonio Air Logistics Center, Kelly Air Force Base, and Corpus Christi Army Depot, Corpus Christi Naval Air Station, Texas; Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; Naval Aviation Depot, Cherry Point, North Carolina; Naval Aviation Depot, Jacksonville, Florida; and Naval Aviation Depot, North Island, California. To determine capacity at each depot, we obtained floor plans identifying work positions for each maintenance shop performing aircraft engine or engine component work. We visited each of the shops and reviewed the floor plans with industrial engineers and shop supervisors to validate the work position counts. Then we determined capacity using the computation method defined in DOD’s Depot Maintenance Capacity and Utilization Measurement Handbook (DOD 4151.15-H), which expresses capacity in direct labor hours. This method multiplies the validated work position count by an availability factor (95 percent) and by annual productive hours (1,615), assuming a 1-shift, 40-hour workweek.
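To illustrate the computation just described, the following minimal sketch applies the handbook method and the related comparisons discussed in the next paragraph: converting a private activity's reported whole-engine reserve capacity to direct labor hours and measuring excess capacity as capacity minus projected workload. The work-position count, engines per year, labor standard, and workload figures are hypothetical; only the 95-percent availability factor and the 1,615 annual productive hours are the handbook values cited above.

```python
# Minimal sketch of the DOD 4151.15-H capacity method and the related
# comparisons described in this section. Only the availability factor and
# annual productive hours are handbook values; all other figures are hypothetical.

AVAILABILITY_FACTOR = 0.95       # assumed usable share of work positions
ANNUAL_PRODUCTIVE_HOURS = 1_615  # 1-shift, 40-hour workweek

def shop_capacity_dlh(work_positions: int) -> float:
    """Annual shop capacity in direct labor hours (DLH)."""
    return work_positions * AVAILABILITY_FACTOR * ANNUAL_PRODUCTIVE_HOURS

def engines_to_dlh(engines_per_year: int, avg_hours_per_engine: float) -> float:
    """Convert a whole-engine reserve capacity to DLH using an average labor standard."""
    return engines_per_year * avg_hours_per_engine

def excess_capacity_dlh(capacity: float, projected_workload: float) -> float:
    """Capacity not covered by projected workload."""
    return max(capacity - projected_workload, 0.0)

if __name__ == "__main__":
    capacity = shop_capacity_dlh(work_positions=120)                   # hypothetical shop
    private_reserve = engines_to_dlh(40, avg_hours_per_engine=2_500)   # hypothetical survey response
    excess = excess_capacity_dlh(capacity, projected_workload=100_000.0)
    print(f"Shop capacity:            {capacity:,.0f} DLH")
    print(f"Private reserve capacity: {private_reserve:,.0f} DLH")
    print(f"Excess capacity:          {excess:,.0f} DLH ({excess / capacity:.0%} of capacity)")
```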
We did not include Naval Aviation Depot, North Island, capacity data in our analysis because at the time of our visit, the engine repair shops were being relocated and work position counts could not be accurately determined. To determine excess capacity, we compared fiscal year 1997 projected workload requirements against our capacity calculations. To identify private sector interest, capability, and capacity to accomplish depot overhaul and repair on military engines with commercial counterparts, we surveyed 24 private repair activities identified as potential sources of repair by DOD and original equipment manufacturer officials. These repair activities included 2 engine manufacturers, 5 airlines, and 17 independent repair activities. The private repair activities reported their reserve capacity to repair military engines in terms of the number of whole engines they could overhaul annually. To compare the reserve capacity reported by the private sector to projected military engine workload, we converted the number of engines reported by the private sector to direct labor hours using the depot labor standard or the average number of direct labor hours used to overhaul each engine at the depot. We used the military services’ workload projection for engine and component repair. While our methodology has limitations, it provides a rough order-of-magnitude estimate of private sector capacity relative to the services’ projected workload for military engines with commercial counterparts. We conducted our overall review of DOD’s depot maintenance program, including our evaluation of the engine repair program, from January 1994 to October 1995 in accordance with generally accepted government auditing standards. According to DOD, decisions to select a public or private activity to perform depot work must consider readiness and cost risks, as well as statutory requirements. Statutes require DOD to maintain a minimum level of capability as well as limit the amount of work that can be contracted out to the private sector. Public and private depot repair capabilities, capacity, and competition are key factors that impact readiness and cost, and, therefore, influence source-of-repair decisions. The amount of similarity between the military and commercial engines usually influences private sector capabilities and capacity. The amount of excess capacity in DOD’s depot system influences cost. Several statutes limit the amount of depot maintenance that can be contracted out to the private sector. In addition, they require competition between the public and private sectors before contracting out work valued at over $3 million. Title 10 U.S.C. 2464 provides that DOD activities should maintain a logistics capability sufficient to ensure technical competence and resources necessary for an effective and timely response to a mobilization or other national defense emergency. It also requires that the Secretary of Defense identify specific logistics activities necessary to maintain the core capability described by that provision. However, 10 U.S.C. 2464 also provides that core logistics activities may be contracted out using the procedures of Office of Management and Budget Circular A-76 if certain requirements are met. For depot maintenance, DOD has defined core as the capability maintained within organic defense depots to meet readiness and sustainability requirements of the weapon systems that support the Joint Chiefs of Staff contingency scenarios.
Core exists to minimize operational risks and to guarantee required readiness for these weapon systems. Core depot maintenance capabilities will comprise only the minimum facilities, equipment, and skilled personnel necessary to ensure a ready and controlled source of required technical competence. Depot maintenance for the designated weapon systems will constitute the primary workloads assigned to DOD depots to support core depot maintenance capabilities. Under the core concept, military requirements are driven by contingency scenarios developed by the Joint Chiefs of Staff. The services must identify what weapon systems and equipment are necessary to meet these requirements as well as the level of depot maintenance that is required to support these systems. Where the services are certain that they must maintain control of depot support to minimize risk to combat commanders, capabilities are established and retained in organic maintenance depots. In November 1993, the Deputy Under Secretary of Defense issued a policy memorandum that directed the services to quantify and report their depot maintenance core requirements by January 1994. The Secretary provided the services a methodology to follow in computing their core requirements. In defining core, DOD policy emphasized that core depot maintenance capability comprises only the minimum level of capability needed to support mission-essential weapon systems. Since core is the capability to support rather than the maintenance of specific weapon systems, this requirement does not apply to workload for specific systems. Thus, depot maintenance for some core engines could be privatized since the capability to repair the engines is similar to the capability used to repair other core engines in the public depot. In addition, the policy memorandum stated that it is not core policy that all mission-essential hardware be maintained in a DOD depot. Private industry may maintain mission-essential weapon systems if a service is satisfied that reliable sources of repair exist in the private sector to negate risk to the weapon system. For example, even though the KC-10 aircraft is a high-priority mission-essential system required early in major regional conflicts, DOD contracted out the maintenance for the life of the aircraft. The KC-10 has a high degree of similarity with its commercial counterpart, the DC-10, which DOD believes mitigates the risk of contracting out the aircraft’s maintenance. We asked depot officials to specify how much of their workload for military engines with commercial counterparts they considered to be core. Their responses, which are presented in table 2.1, indicate that most of the 1997 workload requirements for commercial derivative engines are defined as core. It is not clear to what extent this core workload should be conducted in military depots. The recently published Report of the Commission on Roles and Missions of the Armed Forces challenged the validity of the core concept. According to the report, the services set core requirements that are greater than they actually need, and this practice artificially supports the depots’ current capacity. The report recommended a time-phased plan to privatize essentially all existing depot-level maintenance.
In his August 24, 1995, comments to the Senate Armed Services Committee regarding the report of the Commission on Roles and Missions, the Secretary of Defense stated that DOD agrees with the Commission’s recommendation to outsource a significant portion of DOD’s depot maintenance work, including outsourcing depot maintenance activities for new systems. At the same time, he said DOD believes it must retain a limited organic core depot maintenance capability to meet essential wartime surge demands, promote competition, and sustain institutional expertise. The military services are currently reviewing their core requirements. As early as 1974, Congress established legislative requirements regarding the allocation of depot workload between the public and private sectors. The Defense Appropriations Act of 1974 provided that, of the total amount of the appropriation made available for the alteration, overhaul, and repair of naval vessels, not less than $851,672,000 was to be used for work conducted in naval shipyards and not less than $359,919,000 for work in private shipyards. In addition, prior to 1982, DOD Directive 4151.1, “Use of Contractor and DOD Resources for Maintenance of Materiel,” instructed the services to limit their depots to a maximum of 70 percent of their maintenance workload in order to maintain a private sector industrial base. Revisions to this directive in 1982 continued this requirement. It also stated that, to the extent possible, a competitive industrial base for depot maintenance should be established. More specifically, it provided that contractor support should be considered when it would (1) improve the industrial base, (2) improve peacetime readiness and combat sustainability, (3) be cost-effective, or (4) promote contract incentives for reliability and maintainability. This directive was superseded by a 1992 amendment to 10 U.S.C. 2466 that prohibited the military departments from contracting out more than 40 percent of their depot-level maintenance workload funds to the private sector. In January 1995, DOD reported that about 28 percent of its maintenance expenditures go to private contractors and 72 percent to in-house work. However, we reported in 1994 that the private sector’s share is actually much larger—over half of DOD’s depot maintenance expenditures go to the private sector when the costs of repair parts or various technical or repair services the depots purchase from the private sector are included. Although current statutes limit the amount of overall depot workload dollars that can be used to contract with the private sector, neither the statute nor DOD regulations specify how the aircraft engine workload should be allocated. DOD recently reported that it paid about 38 percent, or $164 million, of the $435 million spent on maintaining commercial counterpart engines to the private sector. The remaining $271 million spent on maintaining these engines in the public depots is less than 2 percent of the total depot maintenance budget. Therefore, increasing the private sector’s share of DOD’s expenditures for repair of this commodity is not likely to significantly impact the overall limitation on commercial repair. Title 10 U.S.C. 2469 provides that depot-level maintenance or repair work with a value of at least $3 million is not to be changed to performance by a contractor unless the change is made using competitive procedures among private and public sector entities.
This provision, which focuses on the transfer of individual units of work, is designed to ensure that workload transfers are cost-effective. DOD officials gave differing views regarding the applicability of this statute to workloads at depots closing from BRAC decisions. Although DOD officials stated that they hoped Congress would repeal the provision during the fiscal year 1996 authorization cycle, this did not happen. Public-private competition is one procedure the services have used to consider the cost-effectiveness of privatizing specific depot maintenance work. It was first used by the Navy in 1985 for its ship repair program. After the program demonstrated cost savings, it spread to naval aviation and then to the Army, the Air Force, and the Marine Corps. Although the competition program is credited with significant savings, private contractors generally do not believe the program is fair. They cite as support the fact that Air Force depots won a high percentage of the competed workloads. Noting the Air Force's success, private sector companies—particularly original equipment manufacturers—believed the Air Force depots were not including all of their costs. Private sector firms urged DOD to eliminate public-private competition since they believed the program was inherently unfair. Nonetheless, the services reported substantial savings from the competitions as depots were forced to reengineer work processes and streamline maintenance organizations. Having traditionally focused on readiness and customer responsiveness, military depots were forced to focus on cost and competitiveness issues. DOD published a cost comparability handbook and undertook various initiatives designed to make the competition program fair. Despite the services' claimed savings, we and DOD audit agencies found that DOD could not verify the results because of weaknesses in its accounting system and internal controls. The future of competition between public and private entities remains uncertain. In April 1994, a government-industry task force on depot maintenance recommended to DOD that the public-private competition program be eliminated. It reported that the inadequacy of DOD's financial management systems to accumulate actual costs for specific workloads in the depots precluded DOD from creating a level basis for public and private competition. A month later, DOD canceled the public-private competition program, directing the services to look primarily to the private sector as a source for major weapon systems modifications and upgrades. In its report on the fiscal year 1995 DOD appropriations bill, the conference committee disagreed with DOD's announced policy and directed DOD to reinstate public-private competition. The Fiscal Year 1995 DOD Appropriations Conference Report 103-747 required that DOD report back to the committees on this subject by January 15, 1995. In its report to the House and Senate Appropriations Committees, DOD stated that its financial systems and databases are not capable of supporting the determination of actual cost of specific workloads. The DOD report also noted that while the Department is developing policies, procedures, and automated systems that will permit actual cost accounting for specific workloads accomplished in organic depots, substantial changes are required that will be time-consuming to complete and implement.
In reviewing DOD’s public-private competition program, we found that many of the criticisms of the program involved internal control weaknesses that can be addressed at the local level. Some improvements had already been undertaken when the competition program was terminated, although the momentum for change was lost when the competition program was canceled. Further, some recent initiatives have demonstrated the potential for implementing required improvements. Recognizing that privatization of depot maintenance workloads only makes sense when it is cost-effective, and that current law precludes privatization without a competitive procedure, we have recommended that the Secretary of Defense (1) reinstitute public-private competition for depot maintenance workloads as quickly as possible; (2) develop and issue guidelines regarding the conditions, framework, policies, procedures, and milestones for reinstituting public-private competition; and (3) require the Defense Contract Audit Agency to review internal controls and accounting policies and procedures of DOD depots to ensure they are adequate for identifying, allocating, and tracking costs of depot maintenance programs and to ensure proper costs are identified and considered as part of the bids by DOD depots. The more similarity there is between military systems and equipment and commercially available items, the greater the likelihood that private repair sources may be cost-effective as depot maintenance sources of repair. Factors that influence the degree of similarity between engines are the commonality of engineering designs, interchangeability of parts, and likeness of repair processes. Similarity affects the availability of spare and repair parts as well as repair facilities, equipment, and trained personnel. The degree of similarity between military and commercial engines can range from 30 percent to 100 percent. Ten military engines with commercial counterparts are now fully or predominantly maintained in the private sector because they are very similar to their commercial counterparts and because depot overhaul and maintenance in the private sector was determined to be the most cost-effective option. According to DOD officials, the time to make such decisions is before the military invests in establishing its own depot maintenance capability. Our limited review indicates that organic repair of military-unique engines is generally more cost-effective than noncompetitive awards to the private sector. In the cases we reviewed, we found that repair sources for military-unique engines were limited to one commercial repair source—the original equipment manufacturer—whereas two or more private sector repair sources were generally available for commercial counterpart engines. Competition for a particular product or service significantly reduces the government’s costs for products or services. Limited data available regarding contract maintenance costs for military-unique engines indicate that private sector repair is more costly than organic repair. For example, both the Air Force and a public accounting firm recently compared the cost-effectiveness of public versus private depot maintenance for the F404 engine, which powers the F-117 aircraft, and the F118 engine, which powers the B-2 aircraft. In both cases, the public depots were found to be a more cost-effective source of repair than the original engine manufacturers. 
In the case of the F404, the analysis resulted in the Defense Depot Maintenance Council transferring the engine workload to the Navy depot at Jacksonville, Florida, where the work will be done under an interservice agreement with the Air Force. The accounting firm's analysis of the F118 confirmed the Air Force's original source selection of the Oklahoma City depot. These examples indicate that privatization of repair for military-unique engines would likely be more costly than organic repair. The key reason is that this workload is awarded on a sole-source basis to the original equipment manufacturer. We have found that most of DOD's contract depot maintenance is awarded on a noncompetitive basis and that it is difficult to control costs under these conditions. The large amount of excess capacity in DOD's depot maintenance system is another factor affecting the cost-effectiveness of contracting out maintenance work. In previous years, war-planning scenarios emphasized a large-scale full mobilization, but current scenarios emphasize smaller, regional conflicts. This change, combined with reductions in force structure, has created significant excess capacity. As a part of DOD's 1995 base closure and realignment process, the Joint Cross Service Group on Depot Maintenance analyzed the capacity of 24 facilities to maintain and repair 16 commodities. It found that DOD's depots have over 3 million direct labor hours in excess engine repair capacity. The engine commodity group was identified as being among the five commodities having the greatest excess capacity. Our assessment of engine capacity in military depots identified about 5.1 million direct labor hours of excess capacity—about 45 percent of total engine repair capacity. Table 2.2 shows our assessment of excess engine capacity in the DOD depot system. As indicated, we found the greatest percentage of excess engine capacity at the Corpus Christi Army Depot and Cherry Point Naval Aviation Depot and the smallest percentage at the Jacksonville Naval Aviation Depot. The excess capacity in the two Air Force engine depots averages about 42 percent. Actions that increase excess capacity and decrease the utilization of existing depots diminish their cost-effectiveness. For example, an organic depot with several thousand employees may incur fixed overhead costs, including the depot's share of base support costs, exceeding $100 million annually. When a military depot has excess capacity, moving workload out of the facility and into the private sector increases the share of overhead expense that the remaining workload must support, raising the unit cost for all the units the facility still produces. Thus, moving workload from the military depots to the private sector at a time when the depot system already has large amounts of excess capacity only increases the fixed cost that must be recovered by each direct labor hour of work still done in the public depot. However, despite the existing excess capacity, consolidating the Air Force engine workload at one depot would result in a capacity shortfall. For example, Oklahoma City Air Logistics Center, with a capacity of 4 million direct labor hours, can absorb all but 1 million direct labor hours of the engine workload currently repaired in the San Antonio Air Logistics Center.
However, the difference could be managed by making better use of available building space; adding some additional shifts; transferring some engine workloads to the Jacksonville Naval Aviation Depot, which repairs engines for the Navy; or, as discussed in chapter 3 of this report, contracting out additional engine maintenance workload to the private sector. Based on DOD's calculations, all commercial counterpart engine workloads could be privatized without breaching the 60/40 legislative restriction on contracting out depot maintenance to the private sector. Public-private competitions would be required before privatizing each engine workload, since the value of each engine's workload exceeds the $3 million threshold provision of 10 U.S.C. 2469. Following this provision should help ensure that privatization would only be undertaken when it is cost-effective to do so. A further consideration should also be the overall cost of operating DOD's entire depot maintenance system, particularly in light of the extensive excess capacity for engine repair and overhaul that currently exists. It is essential that DOD take each of these factors into consideration to ensure that any privatization initiative meets readiness and cost-effectiveness goals. DOD generally concurred with our analysis of factors influencing the allocation of engine depot maintenance workload between the public and private sectors. However, in commenting on this and other recently issued reports addressing public-private competition for depot maintenance work, DOD only partially concurred with our positions regarding future use of public-private competition. DOD officials stated that a November 1994 memorandum from the Deputy Under Secretary of Defense notified depot activities that they could compete for workloads if certain conditions were met. DOD also stated that it will comply with all applicable legislation when making source-of-repair decisions—including the 10 U.S.C. 2469 requirement that prohibits changing workloads valued at $3 million or more from a public depot without using competitive procedures that include both public and private entities. However, DOD also cited its policy that only core workloads should be performed in its depots and noted that it plans to seek legislative relief from the 10 U.S.C. 2469 requirement. DOD's actions show that in practice it has not reinstituted public-private competitions. DOD has not conducted a public-private competition since it terminated the program in 1994, and it has not provided guidance to the services for reinstituting public-private competitions. Furthermore, we believe the November 1994 memorandum provided guidance to the services regarding the conditions under which DOD depots could compete for complementary workloads of non-DOD agencies, such as the Federal Aviation Administration's ground communications equipment. In these circumstances, we continue to believe that DOD has not effectively reinstituted the public-private competition program. Our report includes a recommendation that DOD reinstitute the program and issue guidance regarding the conditions, framework, policies, and procedures for restarting public-private competitions, including the requirement to review the depots' ability to identify and track costs. Since the end of the Cold War and the reduction in new procurements, commercial contractors have aggressively sought more of DOD's maintenance work.
Traditionally, contractors were not interested in military maintenance because it was characterized by sporadic requirements, limited quantities, and other considerations such as proprietary data and older technologies. However, because procurement budgets have declined and relatively few new systems are expected in the future, the private sector's interest has begun to increase. DOD has seven engines with civilian counterparts that are good candidates for exploring whether to contract out their maintenance and overhaul. The opportunity appears to be most promising when two factors are present: (1) the military engine has a high degree of similarity with its civilian counterpart and (2) multiple repair sources, both public and private, are able to compete. We did not do a cost analysis to determine whether a private or public source of repair for commercial counterpart engines would be more cost-effective. Rather, we studied these engines to determine if each had the characteristics to make it a good candidate for public-private competition. Excess capacity in the public depots may reduce the cost-effectiveness of privatizing commercial counterpart engine workloads. Prior to the decision to privatize the San Antonio Air Logistics Center's workload in place, the planned closure of one of the largest organic engine overhaul facilities would have allowed DOD to reduce excess capacity, improve the cost-effectiveness of remaining public sector engine repair facilities, and create opportunities to privatize repair of some commercial counterpart engines. Because the planned privatization-in-place will not reduce excess capacity at the remaining engine repair depots, it may not be cost-effective to contract out to the private sector additional engine maintenance, except in limited cases where it would eliminate redundant repair capability. Seven engines—T56, 501K, F108/CFM56, T63, T700, TF39, and LM2500—appear to be good candidates for evaluating the cost-effectiveness of privatization by conducting public-private competitions. These engines are very similar to their civilian counterparts, and multiple contractors expressed an interest in maintaining or overhauling them. A discussion of each engine is provided in appendix II. The degree of similarity between military and commercial engines can range from 30 percent to 100 percent. For example, the interchangeability of parts between the TF33 and its commercial counterpart can range from 40 to 70 percent, depending on the model being compared. These engine types have a high degree of commonality in their engineering design and require the same repair processes, equipment, and skills to overhaul. For other engine types—T56, 501K, T63, LM2500, T700, F108/CFM56, and F117—the military and commercial versions are nearly identical. According to DOD, there is a logical correlation between the size of the DOD engine fleet relative to the commercial engine fleet and the selection of a source of depot repair. Where commercial carriers have a significantly larger engine inventory than DOD, there is viable, broad-based private sector support available that mitigates risk and affords the opportunity to reduce costs. The competitive environment that exists for these engines allows DOD to benefit from "sharing" fixed-overhead costs with private sector customers who have substantially larger numbers of engines being serviced.
Commercial carriers have a significantly larger engine inventory for 5 of the 10 engines—TF39, T63, F108/CFM56, 501K, and F117—than does DOD, as shown in table 3.1. Commercial carriers have less than 50 percent of the inventory for three types of engines—the T56, LM2500, and T700—which still appear to be good candidates for public-private competition. These engines have multiple sources of repair in the private sector, and DOD in the past has contracted with the private sector for repair of some of these engines. For reasons previously mentioned, the TF33 and TF34 engines do not appear to be good candidates for competition. To determine if private repair facilities would be interested in and capable of maintaining and overhauling military engines with commercial counterparts, we surveyed 24 private companies with turbine engine repair capability. These companies included 2 engine manufacturers, 5 airlines, and 17 independent repair activities. Of these 24, 18 were interested, and 10 of these either were repairing or had repaired the military engine or its commercial counterpart. The contractors we surveyed were interested in working on nine commercial counterpart engines. In most cases, they had sufficient capacity to absorb the additional work. The survey showed the following: Of the 24 repair activities we contacted, 18 were interested in repairing 1 or more of the 10 military engines with commercial equivalents. The other six contractors were either not interested in repairing military engines or did not have the capability to repair whole engines. The interested companies have repaired or are repairing commercial counterparts. All of the 18 repair activities already repair military engines or their commercial counterparts for the military services, foreign countries, or commercial carriers. Seven of the 10 military engines have commercial sources of repair. These are the T56, 501K, LM2500, T63, T700, F117, and CFM56 engines. The other three—TF33, TF39, and TF34—have repair sources for their commercial counterparts—the JT3D, CF6, and CF34 engines. We compared the capacity reported by the private sector to the services' projected workload for fiscal year 1997. Table 3.2 provides the results of our survey. When compared to the services' projected fiscal year 1997 workload, the contractors had more than enough reserve capacity to overhaul 6 of the 10 engines. The private repair activities reported sufficient reserve capacity to accomplish all of the projected depot workloads for six military engines: TF39, TF33, T63, F108/CFM56, 501K, and LM2500. They reported sufficient reserve capacity to perform 75 percent of the military's T56 workload and 73 percent of its T700 engine workload. However, they reported little interest or available capacity to repair the TF34 engine. Private firms also reported sufficient capacity to handle the military F117 engine workload. The C-17 aircraft and its F117 engine are currently under commercial depot contract until 1997. Because of the absence of interest in the TF34 engine, it does not appear to be a good candidate for privatization. Additionally, because of declining use in the commercial market as well as declining repair sources, the TF33 also does not appear to be a good candidate. The LM2500, a ship propulsion version of the TF39 engine, is used to power Navy cruisers, frigates, and destroyers.
With the exception of the TF39 high bypass fan section, the two engines are very similar—with about 35 percent of the LM2500 parts interchangeable with TF39 parts. Other parts and components, although not interchangeable, are similar in design and require the same types of maintenance equipment and artisan skills to repair. Currently, both engines are repaired in public depots. The TF39 is repaired by the San Antonio Air Logistics Center, and the LM2500 is repaired by North Island Naval Aviation Depot. In addition, three private repair activities, including General Electric, reported interest and capability to repair the LM2500 engine. All three sources are repairing the LM2500 for commercial industry, and they have a reserve capacity capable of performing almost six times the projected fiscal year 1997 workload. As early as 1978, we reported that consolidating the LM2500 with the TF39 workload at the San Antonio Air Logistics Center would result in savings. We found that the Navy's decision to equip the North Island Naval Aviation Depot to repair the LM2500 reflected the services' reluctance to share depot maintenance, even though such actions created duplicate maintenance capability. Since then, however, North Island has lost all of its turbine engine workload except the LM2500, and as a result, the repair costs of the LM2500 have steadily increased from $443,678 in 1990 to $925,200 in 1995. Naval Sea Systems Command officials believe the costs have increased because the LM2500 is a relatively small workload and is the only turbine engine North Island currently repairs. The 1995 BRAC Commission added the San Antonio Air Logistics Center to the list of depots to be considered for closure and realignment. The Air Force initially recommended downsizing all five Air Force depots by mothballing excess space and did not recommend closing any maintenance depots. However, the Commission found that the significant excess capacity and infrastructure in the Air Force depot system required the closure of the San Antonio center. The Commission's recommendation provided that DOD should consolidate the center's maintenance workloads at other DOD depots or contract them out to private contractors as determined by the Defense Depot Maintenance Council. The Commission estimated savings from the implementation of this recommendation at $178.5 million annually. The closure of the San Antonio depot would create the need for reassigning the source of repair for the T56, 501K, and TF39 commercial counterpart engines as well as the military-unique F100 engine workloads maintained at this depot. The closure of the depot, along with the ready availability of commercial repair sources, would have made the T56, 501K, and TF39 engines potential candidates for privatization through public-private competition. However, in approving the BRAC recommendations, President Clinton directed that the workload of the San Antonio Air Logistics Center be privatized-in-place or in the local community. According to DOD officials, they are developing plans to privatize workloads—including engines—in San Antonio, as part of a plan to retain over 16,000 jobs in that city. Until the administration decided to privatize the workload in San Antonio, the BRAC's recommendation to close the San Antonio Air Logistics Center offered potential opportunities to improve the cost-effectiveness of DOD's depot activities by consolidating engine repair at other DOD depots.
Based on data provided by the Air Force, consolidating San Antonio's engine workload could have reduced the overhead rate for engine workload at the remaining depot by as much as $10 per hour. Moreover, the remaining Air Force repair depot could not absorb all of the San Antonio engine workload, which would have created opportunities to privatize some commercial counterpart engine workloads. The Air Force could have also considered outsourcing commercial counterpart engines at its remaining engine depot, such as the CFM56 and TF33 engines, to free up capacity to repair military-unique or more mission-essential engines, such as the F100 or TF39 engines. Under the administration's proposed plan to privatize-in-place, the Air Force may not be able to move any work from San Antonio to other engine depots or to allow private contractors bidding for the workloads to move them to facilities outside the San Antonio area. Consequently, the plan will have little impact on reducing the excess capacity and improving the cost-effectiveness of remaining depots. Since the remaining depots will continue to be burdened with excess capacity, moving additional engine workloads from these facilities to the private sector would only increase the fixed costs that must be recovered by each direct labor hour of work still done in the public depot. Therefore, the potential for cost-effective privatization of additional engine workloads may be limited to situations where DOD is maintaining redundant or duplicative depot capabilities for the same or similar engines with commercial counterparts. Such is the case with the LM2500 engine. Whether or not to maintain DOD facilities for depot maintenance of military systems and equipment, such as engines, is a policy decision that must be made by Congress and DOD. The current policy is to maintain core capabilities in the military depot maintenance system. We agree that there are valid arguments to support that policy. However, it is not clear how much core capability is required or to what extent cost-effectiveness should be a consideration in the decision-making process. Nonetheless, we believe cost-effectiveness should be a key part of that process. Generally, commercial counterpart engines are excellent candidates for privatization, particularly those with high degrees of commonality in parts and repair processes and those where the private sector has a significant share of the total engine population. The existence of multiple sources of repair provides increased opportunity for competitive outsourcing of repair while lessening the operational risk inherent when only a single private source of repair is available. Our review of DOD's commercial counterpart engine repair program supports the potential for privatizing much of this work. However, while the potential exists to privatize additional commercial counterpart engine workloads, it may not be cost-effective to do so without reducing the large excess capacity and overhead that already exist in DOD's engine depot maintenance structure. Privatization of additional engine work would further exacerbate the severe excess capacity problem and increase the cost of maintaining engines at the remaining military depots. Without a reduction in excess capacity, it is not likely that planned savings from privatization can be achieved.
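The overhead-absorption arithmetic behind these conclusions can be shown with a short calculation. The sketch below is illustrative only: the $100 million fixed-overhead figure echoes the example cited earlier in this chapter, while the direct labor hours and the size of the transferred workload are assumed values chosen for the example rather than data from this report.

    # Illustrative sketch only: the overhead figure echoes the example cited in this
    # chapter; the labor-hour figures are assumptions chosen for the example.

    def overhead_rate(fixed_overhead, direct_labor_hours):
        """Fixed overhead that each direct labor hour of remaining workload must recover."""
        return fixed_overhead / direct_labor_hours

    fixed_overhead = 100_000_000      # assumed annual fixed overhead for a depot, in dollars
    current_hours = 4_000_000         # assumed direct labor hours before any workload transfer
    transferred_hours = 1_000_000     # assumed hours moved out of the depot to the private sector

    rate_before = overhead_rate(fixed_overhead, current_hours)
    rate_after = overhead_rate(fixed_overhead, current_hours - transferred_hours)

    print(f"Overhead recovered per hour before transfer: ${rate_before:,.2f}")  # $25.00
    print(f"Overhead recovered per hour after transfer:  ${rate_after:,.2f}")   # $33.33

Under these assumed figures, moving a quarter of the workload out of the depot raises the overhead burden on each remaining direct labor hour by about a third; consolidating workload into a depot works in the opposite direction, which is consistent with the Air Force data indicating that consolidation could have reduced the overhead rate by as much as $10 per hour.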
Prior to the administration’s decision to privatize the workload, recommended closure of one of the two major Air Force engine depots offered the potential to improve the efficiency of the remaining engine depots as well as to evaluate the cost-effectiveness of privatizing additional commercial counterpart engine workloads through public-private competitions. If core military-unique workloads from a closing activity are transferred to another public depot with proven capability to perform the work, DOD could not only save costs from the elimination of unneeded infrastructure, but also from the economies resulting from the consolidation of engine workloads and improved utilization of remaining engine facilities. Because the administration plans to privatize-in-place the San Antonio engine workload, the remaining engine depots will continue to have severe excess capacity and any additional privatization of their commercial counterpart work would increase the per-unit cost of remaining engine work in those depots. Thus, with the exception of the LM2500 engine, we believe it may not be cost-effective to privatize commercial counterpart engine workloads from other engine depots at this time. It does not appear to be cost-effective to maintain only one engine line at the North Island Naval Aviation depot, particularly since another engine in the same family of engines is maintained at another DOD depot. The LM2500 workload can probably be performed more cost-effectively by the private sector or through consolidation with the TF39. A public-private competition would be a good choice for determining the most cost-effective source of repair for this engine. Congress may wish to consider requiring DOD to report its plan for privatizing-in-place the engine workload at the San Antonio Air Logistics Center. The plan should include DOD’s strategy for determining the source of repair for engine workloads currently at the San Antonio Air Logistics Center and a discussion of the cost-effectiveness of the various repair alternatives, including transferring the workload to other military depots and privatization-in-place. We recommend that the Secretary of Defense: Require the Secretary of the Air Force to assess the cost-effectiveness of various alternatives for allocating engine workload from the San Antonio Air Logistics Center between the public and private sectors, including privatization-in-place and transferring engine workloads to other military depots. Develop a plan for reducing excess engine capacity and improving the utilization of military depots not identified for closure. This plan should address how DOD intends to (1) comply with the existing law regarding the use of competitive procedures that include public and private entities when changing depot maintenance workloads to the private sector and (2) reduce excess engine capacity at other DOD engine depots in light of planned privatization. Require the Secretary of the Navy to conduct a public-private competition for the LM2500 engine workload. DOD officials generally concurred with our analysis, conclusions, and recommendations regarding privatization opportunities for commercial counterpart engines. Air Force officials said that they plan to assess the cost-effectiveness of various alternatives for allocating engine workload from the San Antonio depot among the public and private sector prior to deciding what engine workloads will be privatized-in-place. 
The Air Force plans to compute its core maintenance requirements by January 1996 using a methodology that includes a privatization risk assessment. If existing commercial capabilities are an acceptable risk, then the core requirements will be reduced accordingly. However, workloads necessary to sustain the Air Force’s core logistics engine maintenance capability will be transferred to the remaining DOD depots. Air Force officials stated that they believe competitive private sector sources (potentially including privatization-in-place) will likely provide the best alternative for cost-effective accomplishment of above-core engine workloads. We noted that the Air Force explanation did not consider the impact of a privatization-in-place decision on the cost of other engine workloads repaired in remaining military depots and did not address the need to conduct competitive procedures that include remaining public depots. DOD concurred with our recommendation to develop a plan for reducing excess capacity and improving the utilization of military depots not identified for closure. DOD officials stated that they recognize additional privatization will aggravate the already serious excess capacity problems at the remaining engine depots and that there is a need for developing a plan for dealing with this problem. DOD officials agreed to reassess the source-of-repair of the LM2500 engine but did not say they would conduct a public-private competition. These officials noted that the Navy has already undertaken a study to evaluate the cost-effectiveness of outsourcing the LM2500 engine versus continuing to repair the engine at North Island Naval Aviation depot. That study will consider engine repair costs, repair cycle times, and the potential impact of the Navy’s emerging regional maintenance concept. While the study’s approach may provide some useful information to Navy business planners, it does not replace the need to comply with the requirement to conduct competitive procedures that include public depots before privatizing the North Island LM2500 workload.
GAO examined the Department of Defense's (DOD) depot maintenance program, focusing on whether: (1) DOD depots should retain their engine repair capabilities; (2) opportunities exist to privatize additional engine repair workloads; and (3) excess capacity within the DOD depot system adversely affects privatization decisions. GAO found that: (1) DOD maintains its engine repair capability in the public depot system to comply with statutory requirements and to reduce the costs and readiness risks associated with private-sector repairs; (2) most of the private companies surveyed have the capacity to absorb additional military engine workloads, but moving work out of underused military depots would increase per-unit repair costs for the work remaining in those depots; (3) the decision to realign Kelly Air Force Base and to close the San Antonio Air Logistics Center would have allowed DOD to reduce excess engine capacity, improve the cost-effectiveness of its remaining engine repair facilities, and privatize additional commercial counterpart engine work; and (4) the decision to keep the depot open by privatizing its workload will limit or preclude any reduction in excess depot capacity and associated overhead costs.
Congress established the trade advisory committee system in Section 135 of the Trade Act of 1974 as a way to institutionalize domestic input into U.S. trade negotiations from interested parties outside the federal government. This system was considered necessary because of complaints from some in the business community about their limited and ad hoc role in previous negotiations. The 1974 law created a system of committees through which such advice, along with advice from labor and consumer groups, was to be sought. The system was originally intended to provide private sector input to global trade negotiations occurring at that time (the Tokyo Round). Since then, the original legislation has been amended to expand the scope of topics on which the President is required to seek information and advice from "negotiating objectives and bargaining positions before entering into a trade agreement" to the "operation of any trade agreement, once entered into," and on other matters regarding administration of U.S. trade policy. The legislation has also been amended to include additional interests within the advisory committee structure, such as those represented by the services sector and state and local governments. Finally, the amended legislation requires the executive branch to inform the committees of "significant departures" from their advice. The Trade Act of 1974 required the President to seek information and advice from the trade advisory committees for trade agreements pursued and submitted for approval under trade promotion authority (TPA) granted by the Bipartisan Trade Promotion Authority Act of 2002. The Trade Act of 1974 also required the trade advisory committees to provide a report on the trade agreements pursued under the Bipartisan Trade Promotion Authority Act of 2002 to the President, Congress, and the Office of the U.S. Trade Representative (USTR). This requirement lapsed with TPA on June 30, 2007. The trade advisory committees are subject to the requirements of the Federal Advisory Committee Act (FACA), with limited exceptions pertaining to holding public meetings and public availability of documents. One of FACA's requirements is that advisory committees be fairly balanced in terms of points of view represented and the functions the committees perform. FACA covers most federal advisory committees and includes a number of administrative requirements, such as requiring that a committee be rechartered when it is renewed. Four agencies, led by USTR, administer the three-tiered trade advisory committee system. USTR directly administers the first tier overall policy committee, the President's Advisory Committee for Trade Policy and Negotiations (ACTPN), and three of the second tier general policy committees, the Trade Advisory Committee on Africa (TACA), the Intergovernmental Policy Advisory Committee (IGPAC), and the Trade and Environment Policy Advisory Committee (TEPAC), for which the Environmental Protection Agency also plays a supporting role. The Department of Labor co-administers the second tier Labor Advisory Committee (LAC) and the Department of Agriculture co-administers the second tier Agricultural Policy Advisory Committee (APAC). The Department of Agriculture also co-administers the third tier Agricultural Technical Advisory Committees (ATACs), while the Department of Commerce co-administers the third tier Industry Trade Advisory Committees (ITACs). Ultimately, member appointments to the committees have to be cleared by both the Secretary of the managing agency and the U.S.
Trade Representative, as they are the appointing officials. Figure 1 illustrates the committee structure. Our 2002 survey of trade advisory committee members found high levels of satisfaction with many aspects of committee operations and effectiveness, yet more than a quarter of respondents indicated that the system had not realized its potential to contribute to U.S. trade policy. In particular, we received comments about the timeliness, quality, and accountability of consultations. For example, the law requires the executive branch to inform committees of “significant departures” from committee advice. However, many committee members reported that agency officials informed committees less than half of the time when their agencies pursued strategies that differed from committee input. As a result, we made a series of recommendations to USTR and the other agencies to improve those aspects of the consultation process. Specifically, we recommended the agencies adopt or amend guidelines and procedures to ensure that (1) advisory committee input is sought on a continual and timely basis, (2) consultations are meaningful, and (3) committee advice is considered and committees receive substantive feedback on how agencies respond to their advice. In response to those recommendations, USTR and the other agencies made a series of improvements. For example, to improve consultations between the committee and the agencies, including member input, USTR and TEPAC members established a communications taskforce in 2004. As a result of the taskforce, USTR and EPA changed the format of principals’ meetings to allow more discussion between the members and senior U.S. government officials, and they increased the frequency of liaison meetings. In addition, USTR instituted a monthly conference call with the chairs of all committees, and now holds periodic plenary sessions for ATAC and ITAC members. Furthermore, the agencies created a new secure Web site to allow all cleared advisors better access to important trade documents. When we interviewed private sector advisory committee chairs again in 2007, they were generally pleased with the numerous changes made to the committee system in response to our 2002 report. In particular, they found the secure Web site very useful. Reviews of the monthly chair conference call and plenary sessions were mixed, however. Chairs told us that their out-of-town members might find the plenaries a helpful way to gain an overall perspective and to hear cabinet-level speakers to whom they would not routinely have access, whereas others found them less valuable, largely due to the perceived lack of new or detailed information. The chairs also said that USTR and the relevant executive branch agencies consulted with the committees on a fairly regular basis, although overall views on the opportunity to provide meaningful input varied. For example, we heard from committee chairs who felt the administration took consultations seriously, while other chairs felt the administration told them what had already been decided upon instead of soliciting their advice. USTR officials told us that the fact that the advice of any particular advisory committee may not be reflected in a trade agreement does not mean that the advice was not carefully considered. In 2002, we found that slow administrative procedures disrupted committee operations, and the resources devoted to committee management were out of step with required tasks. 
In several instances, for example, committees ceased to meet and thus could not provide advice, in part because the agencies had not appointed members. However, the length of time required to obtain a security clearance contributed to delays in member appointment. To address these concerns, we recommended that the agencies upgrade system management; in response, they began to grant new advisors interim security clearances so that they could actively participate in the committee while the full clearance is conducted. Despite these actions, however, trade advisory committee chairs we contacted in 2007 told us certain logistics such as delays in rechartering committees and appointment of members still made it difficult for some committees to function effectively. We found several committees had not been able to meet for periods of time, either because agencies allowed their charters to lapse or had not started the process of soliciting and appointing members soon enough to ensure committees could meet once they were rechartered. The Labor Advisory Committee, for example, did not meet for over 2 years from September 2003 until November 2005 due in part to delays in the member appointment process. These types of process delays further reduced a committee's ability to give timely, official advice before the committee was terminated, and the rechartering process had to begin again. This was particularly true in the case of the Labor Advisory Committee, which, at the time of our 2007 report, still had a 2-year charter. To address these concerns, we recommended that USTR and other agencies start the rechartering and member appointment processes with sufficient time to avoid any lapse in the ability to hold committee meetings and that they notify Congress if a committee is unable to meet for more than 3 months due to an expired charter or delay in member appointments. Furthermore, we recommended that USTR work with the Department of Labor to extend the Labor Advisory Committee's charter from 2 years to 4 years, to be in alignment with the rest of the trade advisory committee system. USTR and the other agencies have taken some steps to address these recommendations. In May 2008, for example, the Labor Advisory Committee's charter was extended to 4 years. Not enough time has passed, however, to assess whether steps taken fully address the problems associated with rechartering and member appointment, since at present all committees have current charters and members appointed. Furthermore, even though committees are now chartered and populated, some of them have not met for over 3 years, despite ongoing negotiations of the Doha Round of the World Trade Organization (WTO), including the July 2008 ministerial meeting in Geneva. For example, although the ATAC charters were renewed in May 2007 and members appointed in January 2008, the FACA database shows that no ATAC has held a meeting since fiscal year 2006. In addition, although USTR held multiple teleconferences for all first and second tier advisors in fiscal year 2008, LAC and APAC members did not participate. It is unclear, therefore, whether the administration received official advice from all trade advisory committees for the Doha negotiations. In addition to the need to improve certain committee logistics, we also found that representation of stakeholders is a key component of the trade advisory committee system that warrants consideration in any review of the system. In particular, as the U.S.
economy and trade policy have shifted, the trade advisory committee system has needed adjustments to remain in alignment with them, including revisions of both committee coverage and committee composition. In our 2002 report, we found that the structure and composition of the committee system had not been fully updated to reflect changes in the U.S. economy and U.S. trade policy. For example, representation of the services sector had not kept pace with its growing importance to U.S. output and trade. Certain manufacturing sectors, such as electronics, had fewer members than their sizable trade would indicate. In general, the system's committee structure was largely the same as it was in 1980, even though the focus of U.S. trade policy had shifted from border taxes (tariffs) toward other complex trade issues, such as protection of intellectual property rights and food safety requirements. As a result, the system had gaps in its coverage of industry sectors, trade issues, and stakeholders. For example, some negotiators reported that key issues such as investment were not adequately covered. In addition, nonbusiness stakeholders, such as environmental and labor groups, reported feeling marginalized because they had been appointed to relatively few committees. The chemicals committee, representing what at the time was one of the leading U.S. export sectors, had been unable to meet due to litigation over whether the apparent denial of requests by environmental representatives for membership on the committee was consistent with FACA's fair balance requirements. In 2007, several committee chairs we interviewed also expressed the perception that the composition of their committees was not optimal, either favoring one type of industry or group over another or industry over nonbusiness interests. Furthermore, some members were the sole representative of a nonbusiness interest on their committee, and those we spoke with told us that although their interest was now represented, they still felt isolated within their own committee. The result was the perception that their minority perspective was not influential. At the same time, while Congress mandates that the advisory committee system is to involve representative segments of the private sector (e.g., industry, agriculture, and labor and environmental groups), adherence to these statutory requirements has been deemed non-justiciable. For example, although the Departments of Agriculture and Commerce solicit new members for their committees through Federal Register notices which stipulate members' qualifications, including that they must have expertise and knowledge of trade issues relevant to the particular committees, neither the notices nor the committee charters explained how the agencies determined, or would determine, which representatives they placed on committees. Without reporting such an explanation, it was not transparent how agencies made decisions on member selection or met statutory representation requirements. As a result, we made a series of recommendations suggesting that USTR work with the other agencies to update the system to make it more relevant to the current U.S. economy and trade policy needs. We also suggested that they seek to better incorporate new trade issues and interests. Furthermore, we recommended they annually report publicly on how they meet statutory representation requirements, including clarifying which interest members represent and explaining how they determined which representatives they placed on committees.
In response, USTR and the other agencies more closely aligned the system's structure and composition with the current economy and increased the system's ability to meet negotiator needs more reliably. For example, the Department of Agriculture created a new ATAC for processed foods because exports of high-value products have increased. USTR and Commerce also split the service industry into several committees to better meet negotiator needs. Furthermore, USTR and the Department of Agriculture now list which interest members represent on the public FACA database, as the Department of Commerce has been doing for years. USTR's 2009 Trade Policy Agenda and 2008 Annual Report also includes descriptions of the committees and their composition. It does not, however, explain how USTR and the agencies determined that the particular membership appointed to each committee represents a fair balance of interests in terms of the points of view represented and the committee's functions. Mr. Chairman, we appreciate the opportunity to summarize our work related to the Trade Advisory System. Based on the recommendations we have made in the areas of quality and timeliness of consultations, logistical issues, and representation of key stakeholders, we believe that USTR and other managing agencies have strengthened the Trade Advisory System. However, we support the Committee's oversight and the ongoing policy review of the system to ensure that it works smoothly and the input received from business and non-business stakeholders is sufficient, fairly considered, and representative.
This testimony provides a summary of key findings from the comprehensive report on the trade advisory system that we provided to the Congress in 2002, as well as from our more recent report in 2007 on the Congressional and private sector consultations under Trade Promotion Authority. In particular, this testimony highlights our recommendations in three key areas--committee consultations, logistics, and overall system structure--as well as the changes that have been made by the U.S. agencies since those reports were published. Our 2002 survey of trade advisory committee members found high levels of satisfaction with many aspects of committee operations and effectiveness, yet more than a quarter of respondents indicated that the system had not realized its potential to contribute to U.S. trade policy. In particular, we received comments about the timeliness, quality, and accountability of consultations. For example, the law requires the executive branch to inform committees of "significant departures" from committee advice. However, many committee members reported that agency officials informed committees less than half of the time when their agencies pursued strategies that differed from committee input. In 2002, we found that slow administrative procedures disrupted committee operations, and the resources devoted to committee management were out of step with required tasks. In several instances, for example, committees ceased to meet and thus could not provide advice, in part because the agencies had not appointed members. However, the length of time required to obtain a security clearance contributed to delays in member appointment. To address these concerns, we recommended the agencies upgrade system management; and in response, they began to grant new advisors interim security clearances so that they could actively participate in the committee while the full clearance is conducted. Despite these actions, however, trade advisory committee chairs we contacted in 2007 told us certain logistics such as delays in rechartering committees and appointment of members still made it difficult for some committees to function effectively. We found several committees had not been able to meet for periods of time, either because agencies allowed their charters to lapse or had not started the process of soliciting and appointing members soon enough to ensure committees could meet once they were rechartered. The Labor Advisory Committee, for example, did not meet for over 2 years from September 2003 until November 2005 due in part to delays in the member appointment process. These types of process delays further reduced a committee's ability to give timely, official advice before the committee was terminated, and the rechartering process had to begin again. This was particularly true in the case of the Labor Advisory Committee, which, at the time of our 2007 report, still had a 2-year charter. In addition to the need to improve certain committee logistics, we also found that representation of stakeholders is a key component of the trade advisory committee system that warrants consideration in any review of the system. In particular, as the U.S. economy and trade policy have shifted, the trade advisory committee system has needed adjustments to remain in alignment with them, including both a revision of committee coverage as well as committee composition.
Low-power television stations, as indicated by their name, operate at lower power levels and transmit over a smaller area than full-power television stations. Low-power television station licensees can include municipalities, universities, nonprofit groups, and small businesses. Low-power television service has evolved since FCC's 1956 order allowing the licensing of low-power translator stations. FCC has maintained that most low-power television service is a "secondary service," meaning low-power television stations may not cause interference to, and must accept interference from, full-power television stations, which are classified as a "primary service." When interference cannot be remedied by adjusting an antenna or other technological methods, low-power television stations must vacate the channel. In such cases, low-power television stations can submit a displacement application to FCC requesting permission to move to another channel, or they can request permission to turn off their broadcast signal while searching for another channel. Cable and satellite providers are generally not required to carry signals from low-power television stations, but some low-power television stations are carried by cable or satellite systems when the low-power station wants to be carried and the cable or satellite provider decides to carry it. FCC uses the term "low-power television stations" to collectively refer to three types of stations: (1) translator stations; (2) low-power television stations that are not translator or Class A stations, which FCC refers to as LPTV stations; and (3) Class A stations. To ensure consistency with FCC's terminology, we are using the term "low-power television stations" to refer to all three types of stations. Translator stations: Translator stations retransmit programming from a primary station, such as a major network (ABC, CBS, FOX, or NBC) or its affiliate, to audiences unable to receive the signal directly from the primary station, usually because of distance or terrain barriers (mountains) that limit the signal's ability to travel long distances. FCC rules prohibit translators from originating any programming. LPTV stations: As with translator stations, FCC's rules allow stations with LPTV licenses to retransmit another station's signals, but the rules also allow LPTV stations to originate programming. Class A stations: Unlike translator stations and LPTV stations, Class A stations are classified as a primary service. When Congress passed the Community Broadcasters Protection Act of 1999 (CBPA), it provided existing LPTV stations a onetime opportunity to apply for a special primary status that gave the stations some interference protection from full-power stations, thereby limiting the instances in which Class A stations could be displaced by full-power stations. To qualify for Class A status, an LPTV station was required, during the 90 days prior to enactment of CBPA on November 29, 1999, to have broadcast a minimum of 18 hours per day, have broadcast an average of at least 3 hours of locally produced programming per week, and be in compliance with FCC's requirements for low-power television stations. Stations that applied for and received Class A status must meet requirements that are not applied to other low-power television stations, such as broadcasting an average of at least 3 hours per week of locally produced programming. Digital broadcasting provides clearer pictures and sound than analog broadcasting.
Analog signals fade with distance, so consumers living farther from a television tower may experience degraded audio and video. With digital technology, pictures and sounds are converted into a stream of digits consisting of zeros and ones. Although digital signals also fade with distance, techniques can be applied to maintain and improve the quality of the broadcast so that pictures and sound generally retain their quality. To transition from analog to digital broadcasts, existing low-power television stations must take the following steps:
• Apply to FCC for a construction permit for a digital "flash cut," digital companion channel, or digital displacement. A flash cut means the station will simultaneously turn off its analog signal and turn on its digital signal, using its current analog channel as its new digital channel. A digital companion channel means a different channel is being used for the digital channel; thus, a station could operate its analog and digital transmissions concurrently on different channels until it decides to cease analog broadcasts. A digital displacement would mean that a station is moving to another channel and is transitioning to digital. It is similar to a flash cut in that the station will simultaneously turn off its analog signal and turn on its digital signal, but it will be using a different channel for the digital signal.
• Construct its digital facilities by September 1, 2015.
• Apply to FCC for a digital license upon completing the construction of its digital facilities. The station may begin broadcasting digitally while FCC is processing its license application.
By completing the digital transition of low-power television stations, FCC will be able to reclaim spectrum being used by the stations that are broadcasting in both analog and in digital (on a companion channel). In addition, FCC has stated that having low-power television stations complete their digital transition will simplify FCC's efforts to reallocate broadcast spectrum for broadband purposes, since there will be more certainty regarding which channels low-power television stations are using for their digital operations. The federal government has established some funding for low-power television stations to transition to digital. Congress created the Low-Power Television and Translator Upgrade Program, through which NTIA made $44 million available to eligible rural stations for reimbursement of equipment costs related to the transition from analog to digital service. In addition, some public television low-power facilities used NTIA's Public Telecommunications Facilities Program and USDA's Public Television Station Digital Transition Grant Program to fund some of the costs for transitioning to digital. As previously noted, we are using the term "low-power television stations" to refer to translators, LPTV stations, and Class A stations. These stations provide programming to communities throughout the United States and its territories. Translators: Over half of the roughly 6,400 low-power television stations are translators. According to FCC data, there are about 3,900 translators located across the country, as shown in figure 1. Translators tend to be concentrated in both rural and mountainous areas. Translator stations may be part of publicly owned systems that retransmit television signals to areas that cannot receive signals from full-power stations because they are too far away, or because terrain blocks the signals.
In such cases, translators may be the only source of free over-the-air programming from nearby full-power television stations, including network programming, public broadcasting, and emergency alerts. For example, a translator association in Colorado is a publicly owned system of stations funded by local taxes that provides the only over-the-air television service in the area, retransmitting a number of satellite and regional full-power stations' signals to rural communities surrounded by mountainous terrain. Many viewers in this area cannot otherwise receive over-the-air television signals from regional network broadcasters because of long distances and rugged terrain. Some translators are part of a "daisy chain," in which multiple translators relay signals from one translator to another, allowing the originating station's signal to be received a few hundred miles away. Translators receive programming on an input channel, and retransmit the signal on an output channel, meaning two channels are used per station, as shown in figure 2. In cases where multiple stations' signals are being retransmitted, each station would require separate incoming and outgoing channels. For example, a translator system in Utah retransmits signals for several Salt Lake City stations. Because each site in Utah's translator system must use 2 separate, nonadjacent channels to successfully retransmit each station's signal with minimal interference, a site in the system retransmitting nine stations' signals would need 18 channels. LPTV and Class A stations: According to FCC data, there are about 2,000 LPTV stations and about 500 Class A stations. As shown in figure 3, these stations are located in both rural and urban areas throughout the country. FCC does not require broadcasters to submit programming information, with limited exceptions, so it is difficult to report on the specific types of programming provided by low-power television stations. However, based on our interviews and reviews of documentation, it is evident that some LPTV and Class A stations provide foreign-language, religious, educational (e.g., programming from a university or local school system), and home shopping programming. For example, one licensee we contacted owns and operates several full-power, Class A, and LPTV stations that air Spanish-language programming as an affiliate group for the Telemundo network. This licensee told us that its Class A stations in Washington, D.C., and Orlando, Florida, air daily local news broadcasts. They also hold annual community events, providing the opportunity for face-to-face interactions between station personnel and the community. As previously noted, LPTV stations can act as translators by retransmitting programming from a primary station, so some LPTV stations may actually be serving the function of a translator, and the number of such stations is unknown. For example, a licensee told us that an LPTV station in Colorado acts primarily as a translator, but occasionally overrides the system's broadcast with its own broadcasts of local high school sporting events, community meetings, and other events. More than half of low-power television stations have taken steps to transition to digital. Low-power television station representatives we spoke with cited a number of benefits to broadcasting in digital, including improved picture and sound quality and improved broadcast coverage.
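The channel arithmetic behind the Utah example can be made explicit with a short sketch. The sketch below is illustrative only and is not drawn from FCC engineering rules; it simply applies the two-channels-per-retransmitted-signal requirement described above to a single translator site.

```python
# Illustrative sketch of the channel arithmetic described above for the Utah
# translator system: each station's signal retransmitted from a site needs its
# own input channel and its own (nonadjacent) output channel.
CHANNELS_PER_RETRANSMITTED_SIGNAL = 2  # one input channel plus one output channel

def channels_needed_at_site(signals_carried: int) -> int:
    """Return how many channels a single translator site would occupy."""
    return signals_carried * CHANNELS_PER_RETRANSMITTED_SIGNAL

# A site retransmitting nine Salt Lake City stations' signals:
print(channels_needed_at_site(9))  # prints 18
```

The 18-channel figure cited above for a nine-signal site follows directly from this arithmetic, which is why translator systems that carry many signals are especially sensitive to how much broadcast spectrum remains available.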
Station representatives also cited another significant benefit of digital: the ability to broadcast multiple program streams through one 6-MHz channel, known as multicasting. For example, a Class A station serving San Francisco and San Jose, California, uses digital multicasting to provide 12 streams of television programming on digital subchannels, including local broadcasts in Vietnamese, Tagalog, Mandarin, Hindi, Punjabi, and Spanish. Once stations have received a digital construction permit from FCC, the actions the stations must take to transition their existing facilities to digital vary, depending on the characteristics of individual stations, their locations, and the markets they serve. For example, some stations may need to update transmitter equipment to carry a digital signal in place of an analog signal. Some stations, particularly those that must broadcast from a new channel, will need to conduct an engineering analysis to identify available spectrum and may need to purchase a new transmitter and antenna equipment. Once stations have completed construction of their FCC-approved digital facilities, they must apply for a license to broadcast in digital. According to FCC's data as of July 2011, about 29 percent of low-power television stations had completed the digital transition. Figure 4 displays the percentage of all low-power television stations that have completed various steps or have taken no action in transitioning to digital. However, the progress toward transitioning varies by the different types of low-power television stations, as shown in figure 5. Translator stations—about 35 percent of which have fully transitioned to digital—have made the most progress in transitioning to digital, compared with about 19 percent of Class A stations and about 20 percent of LPTV stations. About 46 percent of LPTV stations have taken no action to transition to digital, compared with about 36 percent of translators and about 37 percent of Class A stations. Figure 5 shows the progress in transitioning to digital by type of low-power television station, as of July 2011. FCC previously allowed low-power stations to apply for digital facilities, and recently established a deadline of September 1, 2015, for low-power television stations to cease analog operations and convert to digital broadcasting. In 2004, FCC announced that the statutorily established deadline for full-power television stations to transition to digital did not apply to low-power television stations. In explaining its decision, FCC noted that it did not have sufficient spectrum to give all full-power and low-power television stations digital companion channels, and raised concerns that forcing low-power television stations to transition to digital via flash cuts would result in a loss of service to viewers. FCC stated that it would set a low-power digital transition deadline sometime after the full-power transition was complete (which happened in 2009), but did not preclude existing low-power television stations from transitioning to digital earlier. In 2005, FCC began accepting applications from existing LPTV and translator stations that wanted to transition to digital by using a flash cut. FCC subsequently opened two filing windows in 2006 and 2009 during which existing low-power television stations could apply for digital companion channels.
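As a rough cross-check, the per-type completion rates reported above are consistent with the overall figure of about 29 percent. The sketch below is illustrative arithmetic only, using the approximate station counts cited earlier in this report (about 3,900 translators, 2,000 LPTV stations, and 500 Class A stations); because every input is rounded, the result is approximate.

```python
# Illustrative consistency check of the July 2011 transition figures cited
# above, using the approximate station counts reported earlier in this report.
stations = {"translators": 3900, "LPTV": 2000, "Class A": 500}
share_completed = {"translators": 0.35, "LPTV": 0.20, "Class A": 0.19}

completed = {k: stations[k] * share_completed[k] for k in stations}
overall = sum(completed.values()) / sum(stations.values())

print(round(sum(completed.values())))  # roughly 1,860 stations fully transitioned
print(f"{overall:.0%}")                # roughly 29%, matching the overall figure
```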
On October 28, 2010, citing the uncertainty posed by the potential reallocation of spectrum from broadcasting to broadband purposes and the potential impact on low-power licensees, FCC announced a freeze on applications for new digital low-power television stations. However, FCC is still accepting digital flash cut, digital displacement, and digital companion channel applications from existing analog low-power television stations. In September 2010, FCC issued a Further Notice of Proposed Rulemaking that requested public comment on potential deadlines and proposed rules for the digital transition of low-power television stations. In the notice, FCC proposed establishing a deadline of sometime in 2012 for low-power television stations to cease analog operations, but also requested comment on whether a later date would be more feasible. The majority of the comments from low-power television licensees stated that a 2012 deadline was not feasible, and some cited the need for additional time to raise funds, receive FCC approval of their applications, and buy and install equipment. In July 2011, FCC issued an order establishing a deadline of September 1, 2015, for low-power television stations to cease analog operations and convert to digital broadcasting. FCC also stated that it would allow low-power television stations to file for one 6-month extension to complete construction of their digital facilities by March 1, 2016, but that the stations must cease their analog broadcasts by the September 1, 2015, deadline. FCC's order also adopted prior proposals to allow low-power television stations to use full-power emission masks, which could help some stations more easily secure a channel by filtering the station's signal and reducing potential interference, and to increase the power levels for low-power television stations using VHF channels. Our interviews with low-power television stations and other industry stakeholders indicated that the most significant challenge faced by low-power television stations is a result of regulatory uncertainty surrounding FCC's proposed spectrum reallocation. Furthermore, although FCC's July 15, 2011, Report and Order establishes a process for Class A stations to transfer their protected status to their digital companion channel, the lack of such a process had previously posed challenges for some stations in their transition to digital. FCC's proposed spectrum reallocation: Several licensees reported, both in speaking with us and in written comments submitted to FCC, that the regulatory uncertainty created by FCC's proposed spectrum reallocation has negatively affected their ability to transition to digital. One of the recommendations of the National Broadband Plan was for FCC to initiate a rulemaking proceeding to reallocate 120 MHz of spectrum (equivalent to 20 television channels) from television broadcasting to wireless broadband to help meet the nation's increasing demand for broadband service. To begin the process of freeing this spectrum, FCC issued a Notice of Proposed Rulemaking in November 2010 that discussed using a variety of tools, including incentive auctions where broadcasters could volunteer to relinquish their spectrum in exchange for a portion of the incentive auction proceeds—which would require congressional approval—and channel sharing, meaning two or more previously distinct stations split the use of one channel and its 6 MHz of bandwidth.
FCC would then "repack" broadcasters into a smaller number of channels and auction a contiguous band of newly cleared spectrum for wireless broadband uses. Given that the spectrum reallocation proceeding is in its preliminary phase, it has not yet been decided how the proposed reallocation would affect low-power television stations or if those stations would be able to participate in incentive auctions or channel sharing. Low-power licensees and industry representatives told us that FCC's proposal for reallocating broadcast spectrum for broadband purposes created a significant amount of regulatory uncertainty about the fate of low-power television stations. Some licensees believe that FCC should not require low-power television stations to complete their transition to digital until after any spectrum reallocation is completed, when there will be more clarity regarding what channels are available for low-power broadcasters. Low-power licensees are particularly concerned that if full-power stations are repacked into new channels, low-power television stations could (1) be displaced if they cause interference to a relocated full-power station, and (2) find that there are no available channels where they could move to avoid interference. This is a concern for stations located in urban markets where spectrum is already scarce and for translator stations in rural areas that use several channels. As previously noted, Utah's daisy chain system of translators retransmits signals from several Salt Lake City stations, with each signal requiring its own input and output channels each time it is retransmitted. Officials from the Utah system told us that spectrum is so crowded at certain sites that they have had to make use of alternative technologies to avoid interference. They believe that FCC's spectrum reallocation, as proposed, would "destroy" the state's translator network. When FCC decided to adopt a digital transition deadline of 2015, rather than the originally proposed 2012 date, it acknowledged such concerns, noting that it would like to avoid requiring that stations make the significant investment required for conversion to digital facilities, when such facilities may have to be substantially modified because of channel displacement or taken off the air altogether in connection with the implementation of the spectrum reallocation. However, since there is no hard deadline for the spectrum reallocation, it could still occur after the digital transition of low-power television stations. In its order, FCC states that even if the reallocation is not concluded before the digital transition deadline, a 2015 deadline will permit low-power television stations to take specific reallocation proposals into account when finalizing their transition plans. Low-power television licensees and their representatives told us that it is difficult to secure digital transition financing because of the uncertainty created by the proposed spectrum reallocation. The majority of those with whom we spoke noted that many low-power licensees struggle financially and could face difficulties financing their stations' transition to digital. Some stations have delayed their transition to digital because of concerns that their investment will be lost because of a lack of available channels or the need to spend additional funds to prevent interference with relocated full-power stations.
As previously mentioned, Congress established NTIA's Low-Power Television and Translator Upgrade Program to help fund rural low-power television stations' transition to digital. As of June 2011, NTIA had reimbursed approximately $13 million of the available $44 million to roughly 1,000 low-power television stations for digital transition equipment costs. Some licensees have stated that they would not have been able to transition to digital without federal funds. However, NTIA's program is a reimbursement program, and some licensees have noted that it is difficult to obtain financing for the up-front costs. The last day to apply for funds from NTIA's program is July 2, 2012. FCC recommended that NTIA explore seeking an extension of the statutory deadline from Congress given the number of low-power television stations that will transition after the expiration of the program in 2012. FCC's actions related to Class A stations: In its July 15 Report and Order, FCC established a process for Class A stations to transfer their protected status to their digital companion channel; the lack of such a process had previously posed challenges for some stations in their transition to digital. Prior to the order, Class A stations that transitioned to digital by flash cutting on their existing channel retained their Class A status since they were not changing channels. However, this was not the case for Class A stations that were using a digital companion channel to transition to digital. To keep their Class A status, these stations had to continue to broadcast on their existing analog channel, to which the Class A status was tied. If the Class A station chose to turn off its analog signal without requesting special temporary authority from FCC to remain silent, then it lost its Class A protected status, making the station vulnerable to displacement by full-power stations or other primary users of spectrum. This led some Class A stations to delay completing their transition to digital, and some other stations lost their Class A status after transitioning to digital, as discussed below. When FCC established rules in 2004 for the digital transition of low-power television stations, it made it clear that it was not at that time providing Class A status to the digital companion channel of an analog Class A station. FCC stated that providing Class A status to these stations' digital companion channels would complicate the digital transition of full-power stations, since full-power stations must protect Class A stations from interference. However, FCC stated that its intention was for Class A stations to retain their status on the channel they ultimately chose for digital operations, and that FCC would address the issue of how to permit Class A digital companion channels after the completion of the digital transition of full-power stations. Prior to establishing a process for Class A stations to transfer their status to a digital companion channel, FCC officials told us that Class A stations operating on a digital companion channel could request special temporary authority to remain silent on their analog facilities for up to 1 year, after which, by statute, the license expires. While such procedures are contained in FCC's rules, FCC did not make any public statement directing such stations to enlist this procedure to retain their Class A status. According to FCC staff, a total of five Class A stations lost their Class A status after transitioning to digital and shutting off their analog signal.
For example, a low-power PBS station in Pablo, Montana, that provides local programming to the Salish and Kootenai tribes, told us that it lost its Class A status when it transitioned to digital in 2009 and shut off its analog signal. FCC officials told us that these stations could apply to have their Class A status reinstated for their digital facilities, provided that they continued to comply with Class A eligibility requirements and that there would be no adverse effect on other stations. However, the stations may be unaware of this opportunity, as it is not explicitly stated in the July 15 Report and Order. FCC’s orders classifying the various types of low-power television stations noted that each service fulfilled a need for television broadcasting in underserved or unserved communities. Providing service to underserved communities could include providing television service in an area that had none, or providing specific groups with programming tailored to their needs (e.g., ethnic or religious programming), which was otherwise unavailable. In addition, FCC has highlighted how service to these communities has led to positive impacts on FCC’s goals of localism and diversity, including ownership by minorities and women. FCC’s 1956 order establishing a licensing process for translators noted that the translators were primarily intended to provide television to areas without service, but added that they could bring multiple services to communities too small to support several stations. FCC subsequently established a licensing process for LPTV stations in 1982, stating that LPTV stations could add to programming diversity and would be particularly suited to providing local programming. In subsequent policy statements, FCC has repeatedly cited LPTV stations’ positive impact on providing service to underserved communities and on FCC’s policy goals of localism and diversity. For example, in 1994, FCC stated that it established the LPTV service as a means of increasing diversity in television programming and station ownership, and noted that the hallmarks of LPTV stations are localism and niche programming. In CBPA, Congress also cited localism and diversity as goals. Specifically, it found that a small number of LPTV license holders had operated their stations in a manner beneficial to the public good by providing broadcasting that would not otherwise be available to their communities. Congress further found that it was in the public interest to promote diversity in television programming, for example, the programming provided by LPTV stations to foreign-language communities, and directed FCC to establish a process to provide certain LPTV stations with interference protection equivalent to that afforded to full-power stations (i.e., primary status). This led to FCC’s 2000 order implementing CBPA and allowing low-power stations to apply for Class A status, which noted LPTV stations’ contribution of locally originated programming to underserved communities and niche programming for specific groups, and also stated that LPTV service significantly increased the diversity of broadcast station ownership by providing first-time station ownership opportunities for minorities and women. FCC concluded that acting to improve the commercial viability of such LPTV stations was consistent with FCC’s fundamental goals of ensuring localism and diversity in television broadcasting. 
More recently, FCC's 2009 Annual Performance Report noted that low-power television stations are an important source of local community information, and FCC's 2010 notice on the digital transition of low-power television stations emphasized that FCC seeks to ensure the continued viability of low-power television stations that offer important services to specialized minority audiences, foreign-language communities, and rural areas. Although FCC's low-power television goals—meeting the needs of underserved communities, and contributing to localism and diversity—are well documented, FCC has not collected data to evaluate the extent to which the stations fulfill unmet needs or contribute to meeting FCC's policy goals. FCC's decisions regarding the reallocation of broadcast spectrum and its implementation of the digital transition of low-power television stations will affect the continued operation of some low-power television stations. However, FCC's ability to weigh the effects of its decisions on low-power television stations, the communities they serve, and FCC's goals of localism and diversity against the increasing need for wireless broadband spectrum could be limited by a lack of data. In addition, external data on these issues are limited; the trade association for Class A and LPTV stations has disbanded, and several consumer groups we contacted stated that they were not focusing on low-power television stations. We have noted the importance of collecting and analyzing data as a means to evaluate progress toward goals and inform agency decisions. Specifically, our publication Standards for Internal Control in the Federal Government states that management should ensure that there are adequate means of obtaining information from external stakeholders that may have a significant impact on the agency's achieving its goals. In addition, our Internal Control Management and Evaluation Tool notes the need to obtain and provide to managers any relevant external information that may affect the achievement of the agency's missions, goals, and objectives, particularly information related to legislative or regulatory developments and political or economic changes. FCC is not able to determine the extent to which low-power television stations provide local programming and meet the programming needs of underserved communities. FCC requires all full-power and Class A stations to file children's programming reports; beyond this, FCC does not collect data on the types of programming that full- or low-power television stations provide. In 2008, FCC issued an order that would have required full-power and Class A broadcasters to file a standardized form with FCC describing the broadcaster's programming, including local programming and programming for underserved communities. FCC noted that this would help clarify what broadcasters are doing to serve the public interest and allow FCC to monitor trends in the broadcasting industry. However, this requirement was not implemented because of legal challenges, and FCC officials told us they are working to address industry opposition. As a result, FCC's ability to determine the overall community impact of the stations, including whether the stations are serving underserved communities by providing local or foreign-language programming, is limited.
Some stakeholders have suggested that FCC use the revised programming form to collect and analyze data on how broadcasters were serving the public interest and weigh the loss of broadcast service against the benefits from reallocating spectrum to wireless broadband. FCC's lack of data may affect its ability to provide Congress with information regarding whether additional Class A stations would help FCC meet its broadcast localism goals. LPTV stations have not had an opportunity to apply for Class A status since the onetime filing opportunity in 2000. In 2008, FCC noted that it tentatively concluded that it should allow additional qualified LPTV stations to be granted Class A status. It stated that increasing the number of Class A stations would ensure the existence of continued community programming and that the availability of Class A status would provide investment protection for LPTV stations looking to make investments in the digital transition. FCC sought comments on its statutory authority to create additional Class A stations and how to define eligibility, but has not issued an order deciding whether to create additional Class A stations and may need legislative guidance from Congress on whether additional stations can apply for Class A status after the original window of eligibility established by CBPA. It is possible that some LPTV stations are fulfilling the requirements of Class A stations by providing local programming without Class A protection, but the extent to which this is the case cannot be determined without programming data. As part of the 2009 digital television transition of full-power stations, FCC did use some of the technical data it collects from low-power licensees to create an internal document for a commissioner that identified areas in which the only source of over-the-air broadcasting is a low-power television station. However, FCC does not know the number of stations that have ceased broadcasting without FCC's permission, some of which may be holding their license for speculative purposes. Low-power licensees and their representatives told us that some low-power construction permits and licenses are being obtained by "spectrum squatters" that hold on to the permit or license in hope of selling it to an interested party. FCC officials acknowledged that some licensees only broadcast the minimal amount of time needed to maintain their license and are simply holding the license in an attempt to sell it. They also noted that as long as applicants comply with FCC's rules, FCC cannot act against a station that may be obtaining a construction permit or license for speculative reasons. Additionally, FCC's system for storing construction permit and license applications does not automatically cancel expired licenses and construction permits, which has led to expired licenses and construction permits temporarily remaining in the system. FCC officials told us that FCC keeps track of stations that report being silent, but it does not have the resources to do the extensive field testing necessary to identify which stations have gone silent without notifying FCC. They also stated that FCC is working on ways to find these stations without extensive field testing and periodically checks for expired licenses and construction permits in order to cancel them and update their status in the system.
In addition to lacking data about the contributions of low-power television stations to FCC's goals of localism and diversity, FCC has never formally evaluated the extent to which low-power television stations actually affect these goals. In initiating the 2010 quadrennial review of its broadcast ownership rules, FCC did not mention low-power television stations when discussing the policy goals of localism and diversity; however, it did include low-power licensees as panelists on some of its ownership workshops. Similarly, in June 2011, an FCC working group released a white paper on the media and the information needs of communities that described the various types of low-power television stations, but did not assess their impact on communities and FCC's goals of localism and diversity. FCC officials told us that they have not formally evaluated low-power television stations' impact on localism and diversity because low-power television stations are not subject to programming requirements (with the exception of local programming requirements for Class A stations) and are not considered in FCC's multiple ownership rules and policies. However, given FCC's efforts to reallocate spectrum, FCC's ability to determine the public benefit derived from spectrum allocations to low-power broadcasters would be enhanced by information on the impact of low-power stations on communities and FCC's goals. In addition, we have previously identified weaknesses in FCC's collection of data on minority- and women-owned stations, including the lack of an FCC requirement that low-power television stations file such information. FCC recently began collecting ownership information from Class A and LPTV stations, with the first submission due July 8, 2010, but, according to FCC, the response rate was low. FCC officials told us they were sending letters to licensees in an attempt to increase the response rate for the 2011 filing. FCC officials stated that they hoped the data would provide a baseline that they could eventually use to evaluate overall trends in female and minority broadcast ownership, including LPTV and Class A station ownership. Low-power television stations use highly valued radio frequency spectrum to transmit programming, and the demand for such spectrum continues to increase as the United States experiences significant growth in commercial wireless broadband services. Since additional spectrum capacity will be needed to accommodate future growth, transitioning low-power television stations from analog to digital would aid FCC's current efforts to identify spectrum that could be made available for broadband services. FCC has repeatedly noted the benefits of low-power television stations in serving communities, such as providing programming that would not otherwise be available, and expanding ownership opportunities for minorities and women. However, FCC has not taken steps to collect information that would inform its understanding of the impact of low-power television service on communities—whether these stations are reaching underserved communities; aiding FCC's policy goals of localism and diversity; or, as in the case of speculative licenses and those stations that have gone silent, providing no community benefit.
In addition, it is possible that the three types of low-power television stations are affecting communities differently—for example, a translator may be the only source of free over-the-air network television for some communities, while a Class A station may be the only source of foreign-language programming—however, FCC does not have the data to determine if this is the case. Lacking such information, FCC does not know the public benefit of low-power television stations' receiving spectrum for television broadcast. Given that spectrum is a valuable and scarce natural resource and initiatives are under way in the federal government to identify spectrum that can be repurposed for broadband services, a thorough understanding of the community benefits derived from low-power station licenses could prove very valuable. Especially as FCC makes important decisions related to spectrum allocations, such information could enable FCC to weigh the potential loss of low-power television service against the benefits of reallocating spectrum to broadband services. With respect to Class A stations, Congress previously determined that such stations had operated in a manner beneficial to the public good by providing broadcasting to their communities that would not otherwise be available, and instructed FCC to allow low-power television stations in operation at the time of CBPA to apply for protected status. However, it is possible that some low-power television stations currently provide programming commensurate with that of Class A stations, but do not have protected status because they were not in operation during the statutorily provided onetime opportunity to apply for such status. FCC sought comments on its statutory authority to create additional Class A stations and how to define eligibility, but it may need legislative guidance from Congress on this issue. Whether FCC concludes that it has statutory authority, or needs Congress to revise CBPA first, eligibility of additional stations to seek Class A status needs to be resolved. The Federal Communications Commission should take the following two actions:
• Explore options for assessing how the three types of low-power television stations have affected the communities they serve and have contributed to FCC's policy goals of localism and diversity. Such an assessment could include evaluating what existing data FCC could use and what additional data should be collected to inform such an assessment.
• Work with Congress, as necessary, to determine what the long-term role of Class A stations should be, whether additional low-power television stations should be permitted to apply for Class A status, and what criteria stations must meet to qualify for such status. Such criteria could include attributes that contribute to FCC's goals of serving underserved communities and enhancing localism and diversity, such as providing locally produced programming and programming otherwise unavailable to communities.
We provided a draft of this report to FCC for its review and comment. In response, FCC provided technical comments, which we incorporated as appropriate, and written comments, which are reprinted in appendix II. In its written comments, FCC did not agree or disagree with our recommendations but discussed planned and ongoing actions to address them.
In particular, in response to our recommendation to explore options for assessing how low-power television stations have affected the communities they serve and have contributed to FCC's policy goals of localism and diversity, FCC stated that it will ask its Federal Advisory Committee on Diversity in Communications in the Digital Age to address this issue. Regarding our recommendation that FCC work with Congress, as necessary, to determine what the long-term role of Class A stations should be, whether additional low-power television stations should be permitted to apply for Class A status, and what criteria stations must meet to qualify for such status, FCC stated that it plans to analyze the data from Class A stations' children's programming reports to determine the stations' measures to provide educational and informational children's programming. While this is a useful first step, additional work may be needed to provide Congress with the information it needs to make decisions regarding whether other stations should be allowed to apply for Class A status and what criteria such stations must meet. Overall, FCC stated that its spectrum priorities have changed in response to a growing demand for wireless broadband services, and it is examining the role of low-power television stations in providing over-the-air service to rural and underserved communities as it is considering incentive auction and channel-sharing initiatives to free up spectrum for wireless broadband. FCC also commented that because of broadcasters' free speech rights, FCC is limited in its ability to evaluate the programming choices made by low-power television stations. FCC added that, with the exception of Class A stations, low-power television stations operate with secondary interference protection and are not subject to the programming or operational obligations of full-power television stations. FCC further noted that since many low-power television stations are translators, FCC has not found the need to collect extensive programming data from low-power television stations. While we understand the need to respect broadcasters' free speech rights, we believe that FCC should collect data to better understand the extent to which low-power television stations address community needs and contribute to FCC's goals of localism and diversity. In addition, FCC could collect data beyond programming information, such as whether a low-power television station is the sole source of emergency information for a community. We are sending copies of this report to the Chairman of the Federal Communications Commission and appropriate congressional committees. In addition, the report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine where low-power television stations are located, we pulled data from the Federal Communications Commission's (FCC) Consolidated Database System (CDBS) on the coordinates of low-power television stations. We compared the coordinates against the communities served by the stations as a check on the data.
In addition, FCC officials told us they cross-check coordinates data against existing data for stations located on towers registered to the Federal Aviation Administration (FAA), and we determined that over 36 percent of stations are on such towers. To determine the status of low-power television stations' transition to digital, we pulled data from the CDBS to first identify all existing digital and analog low-power television stations. Then, we identified the number of existing analog low-power television stations that
• requested or received a digital flash cut or digital companion channel construction permit,
• requested or received a digital displacement for an existing analog station, or
• are operating a digital companion channel.
We removed construction permits that had expired. We did not remove licenses with past expiration dates if FCC considers them active, as indicated by a facility status of "licensed" or "licensed and silent." Active licenses with past expiration dates represented less than 5 percent of the total active licenses in CDBS, and some may have renewal applications or other actions pending. We then assigned stations to the following categories:
• digital transition completed (all licensed digital low-power television stations that are not a companion channel for a licensed analog channel);
• licensed digital companion channel;
• analog stations granted a digital construction permit for flash cut, companion channel, or displacement;
• analog stations that had applied for a digital construction permit for flash cut, companion channel, or displacement; and
• analog stations that have taken no action (none of the above).
(A simplified sketch illustrating this categorization appears later in this appendix.) To determine the number of stations that had taken no action to transition to digital, we identified the number of analog stations that (1) had not requested or received a digital construction permit (or had received such a permit but it had expired), and (2) were not operating a digital companion channel. To determine the reliability of data pulled from CDBS, we reviewed FCC user guides and forms for the system, and interviewed knowledgeable FCC officials regarding data entry and analysis procedures. In addition to receiving tables from FCC, we created tables from FCC's raw data to determine the low-power television stations' status in transitioning to digital, and the location of facilities. We compared FCC's tables against our own, and we examined data runs for duplicates and other inconsistencies. Finally, we interviewed selected low-power licensees and asked them to verify FCC's data regarding their status in transitioning, and asked them for their general impressions regarding the accuracy of FCC's data. We note that applicants for a construction permit, displacement, or license from FCC enter the data regarding the location of their station, although as previously mentioned, FCC does check the data against existing data for stations on FAA-registered towers. FCC's system automatically captures applications for station permits and licenses—necessary steps in transitioning from analog to digital—and we cross-checked stations' tower coordinates against the community (city and state) served by the station. We determined the data were reliable for our purposes. When discussing the number of stations in the report, we note that while FCC's rules require licensees to notify FCC when their station is silent (not broadcasting) for more than 10 days, there may be some licensed stations that are not actively broadcasting without notifying FCC.
However, these stations are holding licenses for particular pieces of spectrum; therefore, we are including them in our station counts. Thus, when we describe numbers of stations in the report, the word "stations" includes actively broadcasting stations and other stations that may not be actively broadcasting, but which have licenses to broadcast. To identify the steps FCC has taken to transition low-power television stations to digital, and any challenges low-power television stations are facing transitioning to digital, we interviewed FCC officials and reviewed FCC's orders and notices of proposed rulemaking relating to the digital transition of low-power television stations and the proposed reallocation of broadcast spectrum for wireless broadband, as well as comments submitted in response to FCC's requests for comments on these issues. In addition, we reviewed the National Broadband Plan and a related technical paper on the proposed spectrum reallocation, as well as documents regarding the proposed spectrum reallocation from an FCC-sponsored broadcast engineering forum and an FCC webinar with state broadcasting associations. We also interviewed representatives of 18 low-power licensees, which cumulatively hold licenses for approximately 838 low-power television stations. These licensees included owners of all three types of low-power television stations; owners of a large number of stations and owners of a small number of stations; owners providing foreign-language, religious, or local programming; and municipalities that own low-power television stations. Further, we interviewed legal counsel for some low-power television stations and representatives from the National Translator Association, Spectrum Evolution, Association for Maximum Service Television, Association of Public Television Stations, Public Broadcasting Service, and the League of United Latin American Citizens. We received written responses to questions we submitted to the Minority Media and Telecommunications Council. We interviewed officials from the National Telecommunications and Information Administration (NTIA) and the United States Department of Agriculture (USDA) regarding the types of low-power television stations that apply to their programs for funding to transition to digital, and the challenges they face. In addition, we reviewed documents and data from NTIA and USDA to determine the amount of federal funds used to aid low-power television stations' transition to digital. To obtain information on why low-power television stations were established, we reviewed FCC's 1956 order establishing a licensing process for translators; the Notice of Inquiry, staff report, and resulting 1982 order establishing a licensing process for nontranslator, non-Class A low-power television stations; and the Community Broadcasters Protection Act of 1999 and resulting FCC implementation order creating Class A stations. In addition, we reviewed contemporary FCC documents for language regarding the purpose and benefits of low-power television. To determine the extent to which FCC is tracking whether low-power television stations are meeting their statutory and policy goals, we interviewed FCC officials and reviewed relevant documents to identify the types of information FCC collects, how it has used such data in the past, and its current plans for using the data.
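To make the station categorization described earlier in this appendix concrete, the following minimal sketch shows one way records could be grouped into the categories listed above. It is not FCC's or GAO's actual code, and the record fields (for example, is_digital or has_unexpired_digital_permit) are hypothetical placeholders rather than CDBS field names.

```python
# Minimal, hypothetical sketch of the categorization logic described above.
# Field names are placeholders, not actual CDBS columns.
def categorize(record: dict) -> str:
    if record["is_digital"]:
        if record["is_companion_of_licensed_analog"]:
            return "licensed digital companion channel"
        return "digital transition completed"
    # Remaining records are analog stations.
    if record["has_unexpired_digital_permit"]:
        return "granted a digital construction permit (flash cut, companion, or displacement)"
    if record["has_pending_digital_application"]:
        return "applied for a digital construction permit"
    if record["operates_digital_companion"]:
        # Not "no action": this station's progress is reflected in its licensed
        # digital companion channel, which is categorized separately above.
        return "operating a licensed digital companion channel"
    return "no action taken"

example = {
    "is_digital": False,
    "is_companion_of_licensed_analog": False,
    "has_unexpired_digital_permit": False,
    "has_pending_digital_application": True,
    "operates_digital_companion": False,
}
print(categorize(example))  # "applied for a digital construction permit"
```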
In addition, we reviewed comments submitted by low-power licensees and stakeholder groups regarding FCC's data on low-power television stations, and we interviewed low-power licensees and their representatives to get their perspectives on whether FCC has the data it needs to evaluate the extent to which low-power television stations are meeting their statutory and policy goals. We contacted a number of consumer groups to discuss low-power television stations' impacts on communities, but the majority did not respond or stated they were not working on the issue. We conducted this performance audit from October 2010 to September 2011 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Sally Moino, Assistant Director; Cheron Green; Brian Hartman; Crystal Huggins; Bert Japikse; John Mingus; Josh Ormond; Amy Rosewarne; Andrew Stavisky; and Hai Tran made key contributions to this report.
Television stations that broadcast at lower power levels were not required to meet the 2009 digital transition deadline for full-power stations. These low-power television stations transmit over a smaller area, and most are less regulated than full-power stations. Low-power television stations use valuable radio frequency spectrum, and the Federal Communications Commission (FCC) noted the stations' digital transition could aid its efforts to clear spectrum for wireless broadband. GAO examined (1) low-power television stations' location and status in transitioning to digital, (2) FCC's steps to transition low-power television stations to digital and whether these stations are facing challenges transitioning to digital, and (3) why low-power television stations were established and the extent to which FCC collects information to determine if low-power television service is meeting FCC's statutory and policy goals. GAO analyzed FCC data and documents, reviewed stakeholder comments, and interviewed agency officials, stakeholders, and low-power television licensees. Thousands of over-the-air low-power television stations serve communities across the United States in both urban and rural areas, and about 60 percent of all such stations have either completed the digital transition or have taken steps to transition. Over half of all low-power television stations are known as translators, which retransmit major network and other stations' programming in areas that cannot receive the signals from a primary station, generally in rural and mountainous areas. The remaining stations include low-power television stations known as LPTV stations and Class A stations. Class A stations have a special status that gives them greater interference protection than translator and LPTV stations and requires them to broadcast a minimum amount of locally produced programming. Some LPTV and Class A stations serve niche or local audiences with ethnic, religious, or other programming. In July 2011, FCC issued an order that established a deadline of September 1, 2015, for low-power television stations to cease analog broadcasts, but stations may still face challenges in making the transition to digital because of regulatory uncertainty. Specifically, an FCC proposal to reallocate spectrum from broadcasting to wireless broadband created regulatory uncertainty and difficulty for stations attempting to justify investing in transitioning to digital. Such a reallocation would leave fewer channels for television broadcasts and could make it difficult for low-power stations to find an available channel that does not interfere with other stations. FCC's order noted these concerns when adopting the 2015 deadline, rather than a previously proposed deadline of 2012, but it is currently unknown whether the uncertainty posed by the spectrum reallocation will be resolved prior to 2015. FCC's order adopted other measures, such as establishing a process for Class A stations to transfer their status to their new digital channels. Previously, without such a process, some stations delayed completing their transition to digital and others lost their Class A status after they transitioned to digital and ceased analog operation. According to FCC officials, such stations can apply to regain Class A status; however, stations may be unaware of this option as it is not explicit in the order. 
Low-power television stations were established to reach underserved communities; FCC has noted that the stations can positively affect FCC's goals of localism and diversity. However, FCC has not collected data to evaluate the extent to which these stations fulfill unmet community needs or contribute to meeting FCC's policy goals. Specifically, FCC does not collect programming data, is limited in its ability to identify stations that are not broadcasting, and has not evaluated low-power stations' impact in assessments of the information needs of communities. Lacking such information, FCC does not know the public benefit of stations and is limited in its ability to weigh the effects of its decisions on low-power television stations against the increasing need for spectrum for broadband services. Furthermore, although FCC proposed allowing additional stations to apply for Class A status as a means to preserve community programming, it has not issued an order and may need legislative guidance to determine the future of Class A status. FCC should (1) explore options for assessing the impact of low-power stations on the communities served and on FCC's goals, and (2) work with Congress as necessary to determine what the long-term role of Class A stations should be, whether additional stations should be permitted to apply for Class A status, and what criteria stations must meet to qualify for such status. FCC stated it is taking actions to address GAO's recommendations, and provided technical comments that were incorporated as appropriate.
The federal government's framework for preventing, detecting, and prosecuting money laundering has expanded over the course of more than 30 years. With the passage of the Bank Secrecy Act in 1970, for the first time financial institutions were required to maintain records and file reports determined to be useful to financial regulators and law enforcement agencies in criminal, tax, and regulatory matters. BSA has three main objectives: create an investigative audit trail through regulatory reporting standards; impose civil and criminal penalties for noncompliance; and improve the detection of criminal, tax, and regulatory violations. The reporting system first implemented under BSA was insufficient to combat underlying money laundering activity. For example, before 1986, BSA did not contain sanctions for money laundering, although it did contain sanctions for failing to file reports or for doing so untruthfully. To strengthen federal AML initiatives, Congress enacted the Money Laundering Control Act of 1986. In addition to imposing criminal liability for money laundering violations, the act directed each federal banking regulator to require that insured depository institutions establish and maintain a program that would ensure and monitor compliance with the record-keeping and reporting requirements of BSA. The Annunzio-Wylie Anti-Money Laundering Act of 1992 amended BSA and authorized Treasury to require financial institutions to report any suspicious transaction relevant to a possible violation of a law or regulation. It authorized Treasury to require financial institutions to carry out AML programs and, together with the Federal Reserve, to promulgate record-keeping rules relating to funds transfer transactions. The act also made the operation of an unlicensed money-transmitting business that is illegal under state law a crime. In 1994, the Secretary of the Treasury delegated overall authority for enforcement of, and compliance with, BSA and its implementing regulations to the Director of FinCEN. FinCEN was established within Treasury in 1990 initially to support law enforcement by providing a government-wide financial intelligence and analysis network, and became a bureau in 2001. Among its current responsibilities, FinCEN issues regulations; collects, analyzes, and maintains BSA-related reports and information filed by financial institutions; makes those reports available to law enforcement and regulators; and tries to ensure financial institution compliance through enforcement actions. According to its strategic plan, FinCEN seeks to ensure the effectiveness of the BSA regulatory framework and facilitate interagency collaboration. FinCEN's Regulatory Policy and Programs Division (RPPD) is responsible for BSA regulatory, compliance, and enforcement functions. In August 2004, FinCEN created an Office of Compliance in RPPD to oversee and work with the federal financial regulators on BSA examination and compliance matters. The most recent expansion of BSA legislation occurred in October 2001 with enactment of the USA PATRIOT Act. Among other things, the act required an entity defined in BSA as a "financial institution" to have an AML program. Each program must incorporate: (1) written AML compliance policies, procedures, and internal controls; (2) an independent review; (3) a designated compliance person to coordinate and monitor day-to-day compliance; and (4) training for appropriate personnel.
Entities not previously required under BSA to have such a program, such as mutual funds, broker-dealers, MSBs, certain futures brokers, and insurance companies, were required to do so under this act. Moreover, the act mandated that Treasury issue regulations requiring registered securities broker-dealers to file SARs and provided Treasury with authority to prescribe regulations requiring certain futures firms to submit SARs. Among its other provisions, the act required that Treasury issue regulations setting forth minimum standards for financial institutions for verifying the identity of customers who open accounts. The USA PATRIOT Act also required that financial institutions establish due diligence and, in some cases, enhanced due diligence policies designed to detect and report instances of money laundering through private banking and correspondent accounts of non-United States persons; conduct enhanced scrutiny of private banking accounts maintained by or on behalf of foreign political figures or their families; and share information relating to money laundering and terrorism with law enforcement authorities, regulatory authorities, and financial institutions. In addition, nonfinancial businesses also became subject to BSA currency transaction reporting (CTR) requirements when, in the course of their trade or business, they receive more than $10,000 in coins or currency in one transaction (or two or more related transactions). The objectives of U.S. financial services regulation are pursued by a complex combination of federal and state government agencies and SROs. Generally, regulators specialize in overseeing financial institutions in particular financial services sectors, a specialization that stems largely from the laws that established these agencies and defined their missions. Under the BSA regulatory scheme, FinCEN is responsible for the overall administration and enforcement of BSA and may take enforcement actions, but federal and state regulators and SROs conduct day-to-day compliance and enforcement activities. Specifically, with respect to examinations for BSA compliance, FinCEN delegated its BSA examination authority to the federal banking regulators, SEC, CFTC, and IRS. The federal banking regulators, SEC, and CFTC also use their independent authorities to examine entities under their supervision for compliance with applicable BSA/AML requirements and regulations. FinCEN has retained enforcement authority and may impose civil penalties for violations. In addition, each of the federal banking regulators also may impose civil money penalties for significant BSA violations and has specific authority to initiate cease and desist proceedings against the entities it supervises for BSA/AML violations. SEC, CFTC, and their SROs also have authority to enforce their rules requiring BSA/AML compliance, and IRS has very limited enforcement authority delegated by FinCEN. Justice prosecutes criminal violations of BSA, and several federal law enforcement agencies can conduct BSA-related criminal investigations. As noted previously, in 1994, the Secretary of the Treasury delegated overall authority for compliance and enforcement of BSA and its implementing regulations to the Director of FinCEN. Over the years, as more financial activities and types of institutions became subject to BSA requirements, Treasury delegated BSA examination authority to the federal banking regulators and to SEC, CFTC, and their SROs. 
Figure 1 shows the federal agencies and SROs involved in examining for compliance with BSA. Table 1 summarizes the types and numbers of institutions the federal agencies examine for BSA/AML compliance, and which agency or SRO conducts these examinations. FinCEN retains BSA enforcement authority and may take enforcement actions independently of, or concurrently with, other regulators. FinCEN’s Office of Enforcement conducts independent investigations of BSA violations mostly based on referrals of BSA noncompliance from financial regulators. FinCEN has information-sharing MOUs with the federal banking regulators, SEC, CFTC (as of January 2009), IRS, and some states under which these agencies provide FinCEN information on significant BSA violations and deficiencies found during their examinations. Less frequently, FinCEN conducts investigations based on information from Justice and from its own in-house referrals identified through analysis of BSA data. If a FinCEN investigation results in a decision to take an enforcement action, FinCEN may issue a civil money penalty, depending on the severity of the violation. FinCEN and the financial regulators also try to coordinate enforcement actions. (We discuss coordination of enforcement actions in more detail later in this report.) Independent of Treasury-delegated authorities, the federal banking regulators have general authorities under the federal banking laws to conduct compliance examinations and take enforcement actions against institutions for violations of any applicable law, including BSA. The Federal Deposit Insurance Act specifically provides that the Federal Reserve, FDIC, OCC, and OTS are to prescribe regulations requiring the institutions they supervise to maintain procedures for compliance with BSA requirements and to conduct examinations of those institutions for compliance with reporting and AML provisions of BSA. The Federal Credit Union Act contains the same requirement for NCUA. Federal banking regulators examine whether depository institutions under their supervision are in compliance with BSA/AML requirements concurrently with their examinations for the entities’ overall safety and soundness. Depository institutions can generally determine their regulators by choosing a particular kind of charter—for example, commercial bank, thrift, or credit union—which may be obtained at the state level or the national level. While state regulators charter institutions and participate in oversight of those institutions, all of these institutions have a primary federal regulator if they have federal deposit insurance. The Federal Reserve, FDIC, OTS, and NCUA alternate or conduct joint safety and soundness examinations—including a BSA/AML component—with state regulators, generally using the same examination procedures (shown earlier in table 1). As recently as 2004, about one-third of state banking departments reported not examining for BSA compliance; however, they have taken a more active role in conducting these reviews more recently. FinCEN currently has information-sharing MOUs with 46 state agencies that conduct AML examinations. As with examinations, the Federal Reserve, FDIC, OCC, and OTS have authority under the Federal Deposit Insurance Act to take enforcement actions against institutions they supervise and related individuals when they determine that an institution or related individual has violated an applicable law or regulation. 
These agencies also have specific authority to initiate cease-and-desist proceedings for failure to establish and maintain BSA compliance procedures. NCUA also can take enforcement actions under its legislative authorities. Furthermore, state agencies have authority to take enforcement actions against institutions chartered within their state that are in violation of banking legislation. SEC and CFTC are regulatory agencies with missions that focus on protecting investors, preventing fraud and manipulation, and promoting fair, orderly markets, but the regulatory frameworks for the securities and futures industries are structured differently from those for depository institutions. Consistent with this framework, SEC and CFTC regulate their industries in part through oversight of SROs. SEC and CFTC have authority under the Securities Exchange Act and the Commodity Exchange Act, respectively, to inspect the books and records of firms that they supervise. SEC, CFTC, and their SROs have adopted rules for compliance with BSA/AML requirements. More specifically, SEC’s Office of Compliance Inspections and Examinations (OCIE) shares BSA examination responsibilities with securities SROs, which have statutory responsibilities to regulate their own members. The Financial Industry Regulatory Authority (FINRA) provides oversight of the majority of broker-dealers in the securities industry. Other securities self-regulatory organizations include the Chicago Board Options Exchange and the Philadelphia Stock Exchange. OCIE and the SROs both conduct BSA/AML examinations for broker-dealers, but only OCIE conducts routine examinations of registered investment advisors and their affiliated mutual funds for BSA compliance, as they are not members of an SRO. CFTC officials said that CFTC does not routinely conduct direct examinations of the firms it supervises; instead, CFTC oversees the examinations conducted by its SROs—the National Futures Association (NFA), which conducts most of the audits, the Chicago Mercantile Exchange, the New York Mercantile Exchange, the Chicago Board of Trade, and the Kansas City Board of Trade. The SROs monitor for compliance with BSA/AML requirements and with their own rules, which include BSA/AML obligations. SEC and CFTC ultimately are responsible for enforcing compliance with their rules and regulations and can institute enforcement actions against firms within their jurisdiction that appear to be in violation of those agencies’ BSA-related rules. However, because the SROs overseen by SEC and CFTC have rules requiring compliance with applicable laws and regulations, they typically have front-line responsibility for instituting BSA-related enforcement actions and generally inform SEC and CFTC of such actions. The securities and futures SROs have authority to enforce each of their respective BSA/AML-based rules against their members—generally, broker-dealers and futures firms. They take their own enforcement actions against their members, which may include suspending, expelling, fining, or otherwise sanctioning member firms (and their associated persons). While IRS performs a regulatory function with regard to nonbank financial institutions (NBFIs), IRS generally is not considered a “regulator”; it is a bureau within Treasury whose mission is to assist taxpayers in understanding and meeting their tax responsibilities. Unlike the other federal agencies with regulatory functions, IRS does not have independent authority to conduct BSA examinations. 
Rather, under delegation of examination authority from FinCEN, IRS examines any financial institution not subject to BSA examination by the federal financial regulators. Thus, institutions that IRS examines include MSBs; casinos and card clubs; dealers of precious metals, stones, and jewels; and certain insurance companies. IRS’s Small Business/Self-Employed Division, which reports directly to the Deputy Commissioner for Services and Enforcement, conducts BSA compliance examinations of these types of NBFIs. In 2004, IRS created the Office of BSA/Fraud within the division to focus on BSA examinations of NBFIs. As some NBFIs are state-chartered institutions, such as MSBs, IRS also has information-sharing MOUs with many state agencies to facilitate cooperation on examinations. FinCEN did not delegate to IRS authority to enforce BSA requirements, except for foreign accounts, and IRS does not have independent authority to enforce BSA requirements. IRS can issue a letter of noncompliance and make suggestions for corrective action to institutions it examines for BSA compliance. If significant BSA violations or deficiencies were found or if an institution refused to take corrective action, IRS would refer the case to FinCEN to determine what type, if any, of enforcement action might be appropriate. IRS examiners also may refer cases to their Criminal Investigation unit, if the examiners believe that a willful criminal violation may be involved. IRS Criminal Investigation, IRS’s enforcement arm, investigates individuals and businesses suspected of criminal violations of the Internal Revenue Code, money laundering and currency crime, and some BSA requirements. IRS Criminal Investigation investigates BSA criminal violations in conjunction with other tax violations. While Justice prosecutes criminal violations of the BSA, several federal law enforcement agencies in Justice and the Department of Homeland Security can be involved in the detection and investigation of criminal BSA activity. More specifically, Justice investigates individuals and financial institutions that repeatedly and systemically do not comply with BSA regulations or are involved in criminal money laundering offenses and prosecutes those charged. Referrals to Justice from financial regulators of suspected cases of criminal BSA/AML violations also may trigger a Justice investigation. In addition to prosecutions, Justice has resolved criminal investigations through deferred or nonprosecution agreements and guilty plea agreements, which have included fines, forfeitures, remedial actions, and timelines for implementation. Within the Department of Homeland Security, the Secret Service, Immigration and Customs Enforcement, and Customs and Border Protection all use BSA data in their investigations. According to Justice officials, most criminal BSA cases against financial institutions start as investigations of individuals involved in illegal activities, such as drug trafficking or money laundering. Financial regulators have incorporated their BSA/AML responsibilities into their supervisory approaches to compliance and enforcement, but opportunities exist for improved coordination. Federal banking regulators and industry representatives report that their interagency public BSA examination manual increased collaboration on bank examinations. SEC and CFTC have formalized their BSA/AML examination procedures in nonpublic BSA examination modules and coordinate with their SROs on examination issues. 
IRS developed an MSB examination manual and an overall strategy for NBFI identification and examination with FinCEN, but has not fully coordinated its MSB examination schedules with states, missing opportunities to leverage limited resources. Further, across financial industries, agencies have not established a formal mechanism through which they could discuss compliance processes and trends without industry present. The regulators with enforcement authority issued BSA-related enforcement actions in 2008, and the federal banking regulators improved coordination of their enforcement actions. Officials from the federal banking regulators reported improved transparency and consistency of enforcement actions, due in part to new interagency guidance. In 2005, the federal banking regulators, in collaboration with FinCEN, combined their BSA guidance with examination procedures and made both publicly available in one manual. Since 1986, the federal banking regulators have been required to ensure that institutions under their supervision have AML programs. SEC and CFTC and their SROs use a different approach in regulating their industries—they keep their examination modules nonpublic, but provide public guidance to industry through various methods. With respect to BSA, these agencies and SROs also have coordinated and formalized their examination procedures since the 2001 USA PATRIOT Act required institutions under their supervision to have AML programs. IRS developed an examination manual with FinCEN for MSBs, but does not fully coordinate its examination schedules with state examiners. The financial regulators do not have a nonpublic forum for regularly discussing BSA examination procedures and findings across sectors. Through the development of an interagency BSA/AML examination manual, guidance, and inter- and intra-agency training, the banking regulators have increased collaboration on BSA examinations and the transparency of the examination process. In 2005, the federal banking regulators, in collaboration with FinCEN, published the Federal Financial Institutions Examination Council (FFIEC) BSA/AML Examination Manual, which was updated in 2006 and 2007. The manual provides an overview of BSA compliance program requirements and guidance on identifying and controlling money laundering and other illegal financial activities; presents risk management expectations and sound practices for industry; and identifies examination procedures. All federal and state banking regulators use this manual when conducting BSA/AML examinations, whether they are joint or independent examinations. As mentioned previously, the Federal Reserve, FDIC, and OTS will conduct (on an alternating basis) independent or joint examinations with state agencies. NCUA conducts examinations at all federally chartered credit unions, while state supervisory authorities conduct BSA examinations at all state-chartered credit unions. Depending upon the risks, NCUA may conduct joint examinations with the state authorities at the state-chartered credit unions. OCC supervises nationally chartered banks and federal branches of foreign banks and therefore does not share jurisdiction with state banking regulators. Both federal and state examiners said that the manual helped increase the consistency of examinations among the regulators. Federal banking regulators also generally share BSA/AML examination workpapers and findings with their state counterparts in cases where they share regulatory jurisdiction over an institution. 
For example, NCUA officials said that their findings are shared with states to coordinate their reports on joint examinations. State officials we interviewed concurred, stating that they share workpapers in cases where they have federal regulatory counterparts. Several industry officials we interviewed also thought that the federal banking regulators collaborated well with other federal banking regulators on their examinations. The new examination manual also has improved the consistency and transparency of examinations by providing a framework for examinations, requiring risk assessments and transaction testing, and providing publicly available examination procedures for banks. For example, the manual lists requirements for examination scoping and transaction testing. Officials from one state regulator said the manual has helped answer questions for institutions and regulators, and helped institutions structure their AML programs. All of the federal banking regulators and most of the state banking regulators and banking associations we interviewed consider the process of gathering data for banks and the risk-assessment component of the manual beneficial. As one regulator said, the manual helps an examiner understand an institution’s products and services and the steps the institution took to mitigate risks. Most industry officials we interviewed thought the manual provided more consistency to and clearer guidance about the examination process. While regulators and industry officials said that the manual has been beneficial overall, some banking regulator and industry association officials said that initially it sometimes resulted in longer examinations or additional procedures. Federal Reserve examiners noted that it is important for examiners to apply the risk-based approach, using the minimum procedures where appropriate, and to utilize work previously done by a bank’s independent audit, where possible. Similarly, NCUA examiners added that initially the manual resulted in some expanded examinations. However, by using the risk-based approach they are able to focus their resources on the highest areas of risk. Federal Reserve officials added that as examiners have become more familiar with the manual since its adoption, the amount of background reading that examiners need to do in preparing for a BSA/AML examination has decreased. Some officials from the institutions we interviewed were less concerned with the length of the examinations than with some examiners interpreting the manual’s requirements too literally or having expectations beyond those expressed in the manual. For example, an official from one large bank said that when the manual was first implemented, regulators were examining “very close to the manual” and interpreted it literally instead of conducting their examinations based on risk. In another case, an official from one small bank that files very few SARs noted that in recent examinations, examiners unnecessarily focused on the bank’s record keeping and whether SAR reports were filed on time. FFIEC serves as the mechanism for the banking regulators to develop interagency BSA/AML guidance for examiners and the industry. FFIEC is also the forum in which banking regulators and FinCEN discuss and draft manual revisions. In addition to its role in developing the manual, the FFIEC BSA/AML Working Group is an interagency group through which the banking regulators develop joint examiner training, such as the AML Workshop and Advanced BSA/AML Specialists Conference. 
FinCEN officials said that FinCEN specialists also teach at these workshops. Both federal and state banking examiners participate in FFIEC AML workshops and other training sessions offered through their agencies or vendors. In interagency working groups, participants share their knowledge of and experiences with BSA, which federal banking regulator officials have said helped them work toward achieving consistency in their examination processes. Federal banking regulators also train examiners within their own agencies on the new manual. As a check on their examination programs, including their BSA/AML examination programs, the federal banking regulators conduct quality assurance reviews. The regulators’ quality assurance reviews that we examined, which were conducted from 2005 through 2008, indicated that banking examiners were implementing BSA/AML compliance appropriately, with some minor exceptions. For example, reviews from one regulator noted that examiner staff were well trained, devoted significant attention to BSA/AML issues, and generally had well-organized workpapers. Reviews from a second regulator found that examiners complied with BSA/AML guidance, quality control processes were satisfactory, processes for determining enforcement actions and making referrals to FinCEN were sufficient, SAR reviews were timely, and communication between the regulator’s headquarters and regions was strong. Another regulator concluded that its examiners demonstrated strong compliance with all issued national and regional guidance for BSA examinations, and found adequate internal controls, no material weaknesses in workpapers, and adequate supervisory and examination resources for evaluating BSA compliance. While reviews generally were positive, they also noted some weaknesses. One regulator recommended that a regional office develop a process for a quality assurance group to periodically review workpapers on a risk-focused basis because of the complexity of the FFIEC BSA/AML examination procedures and also expressed concern about turnover of qualified staff. A second regulator noted a lack of both independent testing and identification of high-risk accounts in one region, and inappropriate recording of a BSA violation in a second region. A third regulator found instances where reported BSA violations were not forwarded to the agency’s headquarters. SEC, CFTC, and their SROs share responsibility for oversight of the securities and futures industries, and have worked together to incorporate new BSA/AML requirements into their compliance programs. These agencies take a different approach than the federal banking regulators— they have separate, nonpublic procedures for their examiners and provide public guidance to industry. In 2006, SEC and what is now FINRA prepared a nonpublic examination module for broker-dealers in an effort to promote consistency in BSA/AML examinations. SEC staff said that the SEC-FINRA module generally formalized procedures and processes that SEC and its SROs already had in place. SEC staff added that their agency has procedures in place for granting access to nonpublic information in response to requests by other regulators. Furthermore, SEC provided all SRO broker-dealer examination modules and procedures to FinCEN for its review and input under their MOU. SEC also has a separate, nonpublic examination module for mutual funds, which it, rather than the SROs, examines. 
SEC staff explained that BSA/AML examinations of mutual funds are more complex than examinations of broker-dealers because mutual funds do not have their own employees and are managed by investment advisors. Registered investment advisors are rated according to the risk they manage, and those with a higher risk profile are examined more frequently. SEC annually completes approximately 100 mutual fund examinations covering BSA issues. Working through the Joint Audit Committee, the futures SROs developed a common, nonpublic BSA/AML examination module, which the futures SROs (except NFA) use in their BSA/AML examinations. The Joint Audit Committee updates the BSA module annually and submits the module to CFTC. Unlike SEC, CFTC had not provided the examination modules to FinCEN for its review because the agencies did not have an information-sharing MOU in place until January 2009. (We discuss MOUs in more detail later in this report.) However, CFTC and FinCEN officials informally have discussed procedures the futures SROs use during their BSA/AML examinations. In lieu of making examination modules public, SEC, CFTC, and their SROs offer public BSA guidance and education through various methods and venues, including the Internet and industry conferences. For example, SEC developed BSA “source tools” for broker-dealers and mutual funds, which compile key laws, rules, and guidance and provide regulatory contact information. The tools are available on SEC’s Web site. Securities SROs also provide training and update members on BSA/AML rules and guidance. In addition, FINRA has developed an AML program template for small firms on its Web site that provides possible language for procedures, instructions, and relevant rules and Web sites, among other information. Similarly, CFTC provides information on BSA/AML requirements on its Web site and participates in industry conference panels and outreach efforts with other regulators (in particular foreign regulators). Futures SROs also may provide training, send members updates on new BSA/AML rules and guidance, and participate in industry conference panels to help educate institutions on BSA/AML. For example, NFA provides Web-based training and an AML questionnaire for futures commission merchants and introducing brokers. Overall, industry representatives have been complimentary about the information and education provided by SEC, CFTC, and their SROs; however, they still expressed a desire to have BSA/AML examination modules made public. SEC, CFTC, and their SROs also have coordinated on multiple-regulator and cross-industry examination issues because many institutions can be registered with more than one SRO or join more than one exchange. For example, broker-dealers can be members of more than one securities SRO. FINRA (which conducts almost 90 percent of broker-dealer examinations) meets with other securities SROs to coordinate examination schedules and ensure that all broker-dealers are covered by examinations. FINRA also has several regulatory agreements to conduct work on behalf of other SROs. In the futures industry, futures commission merchants must be members of NFA and may be clearing members of more than one contract market. Therefore, the Joint Audit Committee assigns an SRO to be the lead regulator, responsible for conducting examinations for each firm with multiple memberships. Examination reports and findings are shared among futures industry SROs where the firm is a member. 
Some of the largest SEC-registered broker-dealers also may be registered as futures commission merchants or introducing brokers on futures exchanges. In these instances, FINRA and futures SROs may coordinate informally on BSA/AML examinations of any futures firms that are registered dually as securities broker-dealers. As part of FINRA’s information-sharing agreement with NFA, the two SROs meet at least quarterly to share examination results and schedules. Other futures industry SROs obtain FINRA examination results on an as-needed basis. Futures SRO officials said that (1) if FINRA examined an institution’s AML program in the last 6 months and reported no major findings and (2) the institution used the same BSA officer and procedures for its securities and futures business, then SRO officials might refrain from conducting the full range of their examination activities. Finally, SEC, CFTC, and the securities and futures SROs participate in Intermarket Surveillance Group meetings. In addition to working together to help promote consistency in examinations, securities and futures regulators also have programs and procedures—similar to the quality assurance reviews of the federal banking regulators—to review examinations or specific issues. For instance, SEC staff told us that liaisons to each of SEC’s regional offices conduct a quarterly review of a representative sample of examination reports that include AML findings. They added that SEC reviews the examination reports to ensure that AML findings are sufficiently supported and conclusions are valid. SEC staff conduct periodic inspections of FINRA’s overall BSA/AML examination program. The purpose of these inspections is to identify any systemic deficiencies or trends in FINRA’s BSA/AML program. SEC and FINRA staff said that during previous inspections, SEC identified a few BSA/AML-related deficiencies in specific FINRA examinations. FINRA officials stated that while SEC found isolated weaknesses in some examinations, these findings did not indicate any significant trends. FINRA officials stated they use findings from SEC’s reviews to identify areas for additional training. Similar to SEC, CFTC conducts reviews of SROs’ examinations, in which CFTC staff review SRO examinations to ensure they are appropriately examining for compliance with futures laws, including BSA. CFTC officials told us that these reviews have not identified any problems with the BSA/AML examination programs of the futures SROs. Although SEC, CFTC, and SRO officials cited coordination on BSA issues, industry officials at large financial companies with whom we spoke had mixed opinions on coordination among the securities and futures regulators. For example, one industry representative said that futures SROs and FINRA coordinated well and shared examination information. The representative also stated that the futures SRO would not conduct its own examination if its review of FINRA’s examination workpapers showed FINRA’s work to be sufficient. However, another industry representative indicated that they had never seen FINRA and their futures SRO coordinate on BSA/AML examinations. Since our 2006 report, IRS has made improvements in its BSA/AML compliance program by revising guidance, identifying additional NBFIs, and coordinating with FinCEN and the states; however, IRS and state agencies have missed opportunities to better leverage examination resources by not coordinating their examination schedules. 
In response to a December 2006 GAO recommendation, IRS updated its Internal Revenue Manual to reflect changes in its BSA/AML program policies and procedures and distributed the revisions to IRS staff. In our 2006 report, we also said that IRS had identified only a portion of the NBFI population. In 2005, IRS’s database contained approximately 107,000 potential NBFIs; however, during the same year FinCEN estimated that there could be as many as 200,000 MSBs, the largest group of NBFIs subject to BSA requirements. Through subsequent coordination with FinCEN and state regulators and internal identification efforts, IRS significantly increased the number of identified MSBs. For example, at least three or four times a year, FinCEN sends IRS lists of anywhere from 100 to 300 potentially unregistered MSBs, which FinCEN identified by reviewing SARs from depository institutions that mention unregistered MSBs. Similarly, states that signed an MOU with IRS must provide IRS lists of state-licensed and registered MSBs on a quarterly basis. IRS officials said that the agency found about 20 percent of the new MSB locations as a result of information provided by FinCEN and the states, but that most of the newly identified MSBs were added due to internal identification efforts. According to IRS officials, in June 2008 the database contained more than 200,000 unique locations of MSBs. In our 2006 report, we recommended that FinCEN and IRS develop a documented and coordinated strategy that outlined priorities, time frames, and resource needs for better identifying and selecting NBFIs for examination. In response, IRS and FinCEN developed such a strategy. Furthermore, IRS, in concert with FinCEN and state regulators, has developed a BSA/AML examination manual for MSBs that was released in December 2008. The manual contains an overview of AML program requirements, discusses risks and risk-management expectations and sound practices for industry, and details examination procedures. The manual’s main goals are to enhance consistency across BSA examiners, promote efficient use of examination resources, and provide guidance to examiners and MSBs about the BSA examination process. In July and August 2008, IRS and two state regulators tested the feasibility of conducting joint examinations using the new MSB examination manual. Many factors complicate joint examinations—including varying state licensing requirements, coordination of examiner resources, the difficulties of sharing confidential information, and differing examination scope and focus. For instance, one state may require licensing of only money transmitters, while another state also might require check cashiers and currency exchangers to obtain a license. Nonetheless, some state regulators with whom we spoke expressed a desire to conduct joint or alternating examinations with IRS to better leverage state resources. One state regulator said that joint examinations would allow states to issue enforcement actions pursuant to their own state authority against institutions with AML violations, since IRS lacks enforcement authority. According to the Money Transmitter Regulators Association, state financial regulators already conduct joint examinations with other states to leverage examination resources and expertise. IRS officials said they will review and incorporate examiner comments from the joint examination pilot and work with the Conference of State Bank Supervisors to develop formal guidance for IRS and state examiners. 
Additionally, IRS has increased the number of its information-sharing MOUs with state financial regulators from 34 in 2005 to 43 as of October 2008. Under the MOU, the state regulators are typically required to provide lists of state-licensed and chartered MSBs, examination reports, information concerning BSA noncompliance, and examination schedules on a quarterly basis to IRS. Also on a quarterly basis, IRS agreed to provide copies of all Letter 1112 (letters of noncompliance sent to institutions with BSA violations), copies of all Letter 1052 (notifications to new institutions of relevant BSA regulations), lists of MSBs in the state, and examination schedules to state financial regulators. According to the MOU, IRS officials and state regulators will meet periodically to review the implementation of the MOU. Following one state financial regulator’s comment on the usefulness of the information provided in the Letter 1112, IRS officials revised the form letter to include information on the type of institution examined and the activities conducted by that institution. According to IRS officials, many state agencies are not living up to their responsibilities as stated in the MOU. IRS data show that 28 of 43 state agencies that signed an information-sharing MOU have not provided IRS with MSB information and only 4 of 43 have provided examination schedules. In addition, state financial regulators that send MSB data to IRS do so using different formats, limiting the usefulness of the data for IRS. IRS is working with states to develop a standardized format for all state information, making it easier for states to provide the information and for IRS to integrate the information into its database. While IRS provides MSB information to state regulators, it has not shared its examination schedules with states, contrary to what it agreed to do as part of their MOUs. IRS officials said they provide state regulators with their annual workplans, which include the total number of NBFIs to be examined but not the names of the institutions to be examined. Therefore, the state financial regulators cannot plan their examinations to avoid potential overlap or coordinate joint examinations. One state agency noted that it had conducted examinations of MSBs, only to find out later that IRS had conducted its own examinations not long before. Several state agencies said that greater coordination and sharing of examination schedules would help reduce redundancy in examination resources. Best practices in interagency coordination suggest agencies should assess their relative strengths and limitations, identify their mutual needs, and look for opportunities to leverage each other’s resources—thus obtaining additional benefits that would not be available if they were to work separately. IRS officials said state regulators would not derive much benefit from IRS providing examination schedules on a quarterly basis because new case files on institutions are sent to field managers often, sometimes weekly, and field managers and examiners have flexibility and discretion to determine their examination schedules. In addition, some institutions on IRS examination lists may not appear on a state regulator’s list because of varying state licensing and examination requirements of MSBs. 
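To illustrate the kind of data standardization IRS and the states are pursuing, the following minimal sketch in Python (with hypothetical field names and state column mappings that are not drawn from any actual IRS or state data standard) shows how MSB records submitted in differing state formats could be mapped into one common schema before being merged into a central database.

import csv

# Hypothetical common schema; the field names are illustrative only and do not
# reflect any actual IRS or state reporting standard.
COMMON_FIELDS = ["legal_name", "state", "license_number", "msb_activity"]

# Each state may label the same information differently; a per-state mapping
# translates its column names into the common schema.
STATE_COLUMN_MAPS = {
    "NY": {"Business Name": "legal_name", "Lic No": "license_number",
           "Activity": "msb_activity"},
    "TX": {"licensee": "legal_name", "license_id": "license_number",
           "service_type": "msb_activity"},
}

def normalize(state, row):
    """Map one state's record into the common schema."""
    mapping = STATE_COLUMN_MAPS[state]
    record = {common: row.get(original, "") for original, common in mapping.items()}
    record["state"] = state
    # Ensure every common field is present, even if the state omitted it.
    return {field: record.get(field, "") for field in COMMON_FIELDS}

def load_state_file(state, path):
    """Read a state's CSV submission and return records in the common schema."""
    with open(path, newline="") as f:
        return [normalize(state, row) for row in csv.DictReader(f)]

Standardizing submissions along these lines would make it easier to merge state lists into IRS's MSB database and to spot institutions that appear on a state list but not in IRS's records.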
However, by not coordinating examination schedules with states, IRS may have missed opportunities to leverage resources, reduce regulatory duplication, maximize the number of MSBs to be examined, and better ensure BSA compliance by MSBs. While all federal agencies have made improvements in their BSA compliance efforts, they have not established a formal mechanism through which they collectively can discuss sensitive BSA examination processes and findings in nonpublic meetings. All federal agencies and some SROs participate in the Bank Secrecy Act Advisory Group (BSAAG)—a public-private working group headed by FinCEN that meets twice a year to discuss BSA administration. BSAAG also includes a number of subcommittees on various BSA/AML issues. Representatives from the SROs, industry, and law enforcement agencies are present at these meetings and on some subcommittees. Some regulatory officials have told us that the presence of industry representatives and the number of participants in BSAAG inhibit more detailed discussion on some issues. Further, sensitive information, such as examination processes and findings, cannot be discussed due to the presence of industry. Some federal agency officials said they have held discussions with regulators of other industries outside of BSAAG, but the discussions generally were held on an informal basis and were not inclusive of all federal agencies. Some banking regulators cited their public manual as a reason for not meeting outside of BSAAG with regulators of other industries. FDIC officials stated that, outside of meetings with other federal banking regulators, they had met with several state MSB regulators to understand the MSB examination process and other state roles relating to MSBs. One of the primary goals of these meetings was to determine if they could share information about MSB examinations with some state regulators. SEC staff said they informally have had discussions on BSA/AML issues with federal bank regulators and CFTC. SEC and Federal Reserve staff cited frequent, informal communications between the agencies on BSA issues. Further, SEC and the Federal Reserve signed an MOU in July 2008 under which they can share information on common interests, which could include BSA violations. Under the MOU, if SEC or the Federal Reserve became aware of a significant violation occurring in an institution regulated by the other agency, they would notify the other agency and provide additional information if requested. CFTC officials said that outside of BSAAG, they generally discuss examination procedures only with SEC and FINRA. Similarly, IRS officials stated they have met with regulators on an ad hoc basis when there have been overlapping issues. FINRA officials told us that they had very useful meetings with the Federal Reserve on two occasions (in April and December 2008) during which they discussed BSA examination approaches and findings. These meetings will continue on a biannual basis. In addition, SEC and FINRA staff said that in November 2008 they met with OCC and Federal Reserve staff to share general information about SEC’s and FINRA’s BSA/AML examination programs. While they did not discuss specific examination procedures, FINRA officials said they would be willing to do so if it were useful. 
Some industry officials expressed concern about examination overlap and suggested that if regulators collectively could discuss these issues, the collaboration could help decrease resources expended on responding to duplicative information requests and increase the consistency of examination processes. Many of the largest financial institutions are part of a bank or financial holding company structure—companies that could include broker-dealers and futures firms, as well as banks. Therefore, some financial institutions are subject to oversight by multiple regulators across their various business lines. Industry representatives said that large financial institutions employ enterprise-wide, risk-based AML programs that have many similar elements across business lines. As no single regulator examines BSA/AML procedures for all of an institution’s functions, in some cases institutions must work with several regulators to review the same or similar policies and procedures. In addition, some officials mentioned that regulators sometimes arrived at different findings when looking at the same BSA processes. For example, one official stated that regulators of different industries reviewed a common AML procedure and arrived at different conclusions—one regulator approved a policy and another requested a wording change. According to our key practices for collaboration, agencies can enhance coordination of common missions by leveraging resources and establishing compatible procedures. To facilitate collaboration, agencies need to address the compatibility of standards, policies, and procedures—including examination guidance and its implementation. However, because banking-regulator and MSB examination guidance is public and SEC and CFTC guidance is nonpublic, the agencies cannot address these and other sensitive regulatory issues in the existing interagency forum, BSAAG. As a result, the regulators may not be able to gain the benefits of collaboration—leveraging scarce resources and building on the experiences and improvements of other agencies. Furthermore, by not having a mechanism that could provide an overview of examination efforts, regulators may be missing opportunities to (1) discuss BSA/AML concerns from the viewpoint of all financial industries being interconnected and (2) decrease the regulatory burden, where possible, for the institutions under examination by multiple regulators. The BSA/AML examinations that federal banking regulators, SEC, CFTC, and their SROs conducted resulted in the citation of violations and the taking of informal (in the case of the federal banking regulators) and formal enforcement actions. In our interviews, the federal banking regulators discussed factors potentially influencing BSA compliance in their industry and also reported improved interagency coordination on enforcement actions due, in part, to the issuance of new guidance. SEC and CFTC are kept apprised of enforcement actions that their SROs take through meetings and information-tracking efforts. In contrast, because it does not have enforcement authority, IRS refers the BSA violations it finds to FinCEN, which takes an enforcement action, if appropriate. Justice pursues cases when it believes BSA noncompliance is criminal. The federal banking regulators have taken informal and formal enforcement actions against depository institutions to address BSA/AML concerns. 
The federal banking regulators can take enforcement actions only under their enabling legislation contained in Title 12 of the United States Code, but these actions can be based on an institution’s violation of BSA. Table 2 provides aggregate numbers of examinations, violations, and enforcement actions taken by the federal banking regulators. Under the regulators’ AML program rules, in 2008 the most frequently occurring violations concerned requirements to independently test an institution’s BSA/AML compliance program, train staff on BSA/AML, and maintain internal controls. BSA requires that depository institutions implement and maintain a system of internal controls to ensure an ongoing BSA compliance program. An example of such a control is monitoring for suspicious activity, which one regulator explained can be costly, difficult, and time consuming for an institution to implement. With respect to training, several federal banking regulators said that some banks’ staff, even BSA compliance officers, may lack adequate BSA/AML training, especially when such staff are newly hired. The most frequently cited violations under Treasury’s BSA rules are similar across the banking regulators. These violations concern customer identification programs (CIP), CTRs, and requirements for filing reports. For example, a violation of CIP requirements could mean that an institution did not implement a written CIP. An institution violating 31 CFR 103.22 did not adhere to the requirement to report currency transactions in excess of $10,000. Violations of 31 CFR 103.27 could mean that an institution failed to meet the filing and record-keeping requirements for CTRs, reports of international transportation of currency or monetary instruments, or reports of foreign bank and financial accounts. While regulators emphasized that no one factor could explain upward or downward trends in BSA violations, they cited several possible factors influencing these trends—the implementation of the FFIEC BSA/AML examination manual, additional training for examiners and the banking industry, banking regulators more clearly communicating their expectations to institutions, and institutions developing better AML programs. For example, one regulator said that implementing the examination manual may have contributed to a decline in violations by providing guidance to banks on identifying and controlling BSA/AML risk and promoting consistency in the BSA/AML examination process. However, another regulator said that the manual may have led to its increasing number of violations by providing better guidance to examiners. Appendix III provides further information on selected BSA/AML-related enforcement actions taken by all financial regulators. In response to violations, the federal banking regulators have issued thousands of informal enforcement actions but relatively few formal enforcement actions in recent years. For example, in fiscal year 2008, they issued a total of 3,416 informal and 37 formal enforcement actions. Federal banking regulators said that generally, informal corrective actions will suffice for technical noncompliance or the failure of a portion of the AML program that does not indicate that the entire program has failed. If a compliance violation is significant and remains uncorrected after an informal action has been taken against an institution, a federal banking regulator may then decide to take a formal enforcement action. 
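As a simple illustration of the kind of internal control at issue, the following sketch in Python (hypothetical, not any regulator's or institution's actual monitoring system) flags customers whose same-day currency transactions exceed the $10,000 CTR reporting threshold, reflecting the treatment of multiple related transactions described above. Actual AML monitoring is far broader, covering suspicious activity detection, structuring analysis, and exemption handling.

from collections import defaultdict
from decimal import Decimal

# Currency transactions above this amount require a CTR filing.
CTR_THRESHOLD = Decimal("10000")

def transactions_requiring_ctr(transactions):
    """Return (customer, date) pairs whose aggregated same-day cash activity exceeds the threshold.

    `transactions` is an iterable of (customer_id, date, amount) tuples, where
    amount is the currency (cash) portion of the transaction. Same-day
    transactions for a customer are aggregated, reflecting the treatment of
    multiple related transactions.
    """
    totals = defaultdict(Decimal)
    for customer_id, date, amount in transactions:
        totals[(customer_id, date)] += Decimal(str(amount))
    return {key: total for key, total in totals.items() if total > CTR_THRESHOLD}

# Example: two related same-day cash deposits of $6,000 each are flagged,
# while a single $9,500 deposit is not.
flagged = transactions_requiring_ctr([
    ("cust-1", "2008-06-02", 6000),
    ("cust-1", "2008-06-02", 6000),
    ("cust-2", "2008-06-02", 9500),
])
print(flagged)  # {('cust-1', '2008-06-02'): Decimal('12000')}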
Banking regulator officials said that formal enforcement actions are public and generally considered more stringent than informal actions because they address more significant or repeated BSA violations. Formal enforcement actions can include cease and desist orders, assessments of civil money penalties (CMP), or supervisory agreements, and are enforceable through an administrative process or other injunctive relief in federal district court. Federal banking regulators said they track enforcement actions through their various management information systems. Federal banking regulators reported that new interagency guidance has helped improve the transparency of BSA enforcement. In July 2007, the federal banking regulators issued the “Interagency Statement on Enforcement of Bank Secrecy Act/Anti-Money Laundering Requirements,” which clarified the circumstances under which regulators would issue a cease and desist order against a financial institution for noncompliance with BSA requirements. It does not address assessment of CMPs for violations of the BSA or regulators’ implementing regulations. Regulators that we contacted typically stated that the guidance has been beneficial. FDIC officials maintained that with the guidance, bank officials have a better idea of the factors FDIC and other banking regulators take into account before executing a cease-and-desist order. They added that the interagency statement advises that the appropriate regulator may take a different level of action depending on the severity and scope of the bank’s noncompliance. NCUA officials said they found that the guidance has led to more consistent enforcement actions taken among the banking regulators in response to cited deficiencies and violations. Both Federal Reserve and OCC officials suggested that the guidance provided more clarity about, or added transparency to, the circumstances under which the agencies will take formal or informal enforcement actions to address concerns relating to a bank’s AML program requirements. Federal banking and state regulators generally coordinate when necessary on BSA enforcement actions. For example, Federal Reserve officials said they usually take (and terminate) actions jointly with state regulators, and a bank must continue to comply with a joint enforcement action until both the Federal Reserve and the state authorities terminate the action. Accordingly, the Federal Reserve and state regulators typically terminate enforcement actions simultaneously. Officials from several state agencies said that as a general rule, they took informal and formal enforcement actions jointly with their federal counterparts, although some state agencies were likely to coordinate only formal actions. Several state officials reported taking few, if any, formal BSA/AML-related actions against depository institutions, especially credit unions. Several officials from institutions that were examined by multiple federal banking regulators, such as OCC and the Federal Reserve, said that these regulators coordinated well among themselves, while others indicated they were unsure or thought coordination could be improved. Bank officials had mixed views on coordination of enforcement actions between federal and state regulators; some thought the extent of coordination was sufficient, others thought it was lacking, and several simply did not know how extensively these regulators coordinated on enforcement. 
The enforcement actions that SEC, CFTC, and their SROs use to address BSA compliance can be informal or formal. All SEC enforcement actions are formal and public, but its SROs use both informal and formal enforcement processes. SEC staff said that most cited BSA/AML deficiencies are corrected through the examination process. Most examinations conclude with an institution sending SEC a letter stating how it will correct the compliance problem. FINRA officials also said that firms must document the corrective action to be taken to address any issues found during an examination. If SEC examiners find significant deficiencies with a firm’s BSA program, SEC staff may refer the matter to SEC’s Division of Enforcement or an SRO for enforcement. In accordance with its MOU, SEC also will notify FinCEN of any significant BSA/AML deficiencies. SEC’s Division of Enforcement will assess whether to proceed with an investigation, determine whether a violation has occurred, and if so, whether an enforcement action should be taken against the firm or any individuals. FINRA officials said their enforcement actions are typically fines, the amount of which may vary depending on the egregiousness of the compliance failures, the scope of conduct, and the overall risk of money laundering through the firm. In fiscal year 2008, SEC and the securities SROs took 25 formal enforcement actions against securities firms (see table 3). As shown in table 4, in both fiscal years 2007 and 2008, violations involving policies, procedures, and internal controls, as well as annual independent testing, were the most common AML-program-related violations among broker-dealers. With respect to BSA reporting requirements, in fiscal year 2007 the most common violations among broker-dealers were related to CIP requirements and required information sharing. In fiscal year 2008, the most common violations were related to CIP and SAR requirements. SEC staff said that many of the largest securities firms have had AML programs in place for some time, while medium-sized and small firms had AML programs that could be improved. SEC and its SROs routinely share information about their enforcement activities. For example, FINRA officials said that they work with SEC if they are both investigating an institution to ensure they are not duplicating efforts. SEC and FINRA officials said that FINRA makes SEC staff aware of any significant BSA/AML violations prior to an enforcement action being taken. Further, in accordance with its MOU with FinCEN, SEC tracks its examinations, violations, and enforcement actions, and collects similar information from its SROs on a quarterly basis, which it then provides to FinCEN. While CFTC retains authority to issue enforcement actions against futures firms, its SROs have taken all enforcement actions for BSA/AML deficiencies to date. When CFTC becomes aware of potential BSA/AML violations, it usually refers the violations to a firm’s SRO for investigation and potential enforcement action, although SROs typically develop enforcement cases through the examination process. At the conclusion of an SRO examination, the SRO issues a report to the futures firm and notifies the firm of any deficiencies in its AML programs. SROs require futures firms to correct any material deficiencies prior to closing the examination. If the deficiencies are minor, SROs may cite them in the examination report and close the examination with no disciplinary action or require corrective action before closing it. 
If examination findings are significant, then SROs may start an investigation, during which internal committees at the SROs may review information collected during the examination and investigation and determine whether an enforcement action is warranted. SROs take only formal, public enforcement actions, and all rule violations and committee findings are made public. SROs resolve most enforcement cases related to violations of BSA/AML SRO rules by issuing a warning letter or assessing a fine. The amount of the fine varies depending on the severity of the violation. SROs also may take other types of actions for violations of their rules, such as suspension of membership or expulsion. NFA conducts the vast majority of examinations of futures firms and is responsible for all formal enforcement actions taken in recent years (see table 5). The number of BSA/AML-related enforcement actions initiated by NFA decreased from 21 in 2006 to 10 in 2007 and 8 in 2008. Officials added that when new requirements become effective, they usually see an increase in deficiencies related to the new requirements. NFA officials said they reduced the number of deficiencies cited by requiring firms to submit written BSA compliance programs for review during their membership application process. NFA officials said the most common BSA violations cited since 2003 were failure to have annual independent audits and failure to conduct annual BSA training of relevant staff. CFTC officials said they meet quarterly with SROs to review their open investigations and enforcement actions. If an SRO takes an enforcement action, it will send a copy of the enforcement action to CFTC. CFTC’s Division of Enforcement regularly tracked BSA violations investigated and charged by futures SROs, but it did not maintain statistics by the type of violation. Additionally, CFTC receives and reviews examination reports from all SROs, but has not compiled BSA/AML examination statistics. In anticipation of finalizing the information-sharing MOU with FinCEN (which the agencies finalized in January 2009), CFTC recently began collecting BSA examination information from the SROs. (We discuss information-sharing MOUs later in this report.) As previously discussed, IRS does not have its own or delegated authority to issue enforcement actions against NBFIs for BSA violations. If IRS finds BSA violations when examining an NBFI, it can send a letter of noncompliance (Letter 1112) and a summary of examination findings and recommendations to the institution, and also include an acceptance statement for the institution to sign. In response to the statement, the institution may agree to implement the recommendations and correct any violations. Generally, IRS would conduct a follow-up examination within 12 months after issuing the letter to determine whether the corrective action was taken. In cases where significant BSA violations have been found or past recommendations have been ignored, IRS will refer the case to FinCEN to determine what, if any, enforcement action should be taken. IRS examiners and their managers make the initial determination to refer a case, and then an IRS BSA technical analyst reviews the case to decide whether to forward the referral to FinCEN. IRS has referred approximately 50 cases to FinCEN since fiscal year 2006. The referrals include the facts of the case, a summary of the examination, and the violations cited. 
During fiscal year 2008, IRS reported citing 23,987 BSA violations and issued a Letter 1112 to 5,768 different institutions (see table 6). Table 7 provides a summary of the total number of institutions cited for one of the five violations IRS cites most often. Justice officials said they coordinate with financial regulators and FinCEN during criminal BSA investigations and when taking criminal enforcement actions. Most of Justice's BSA cases against financial institutions start as investigations of individuals involved in illegal activities, such as drug trafficking or money laundering. Justice officials also said they have started investigations after receiving referrals from federal regulators. They indicated that having a financial regulator assigned to a Justice investigation can help investigators better understand the financial industry and BSA policies and procedures. Over the last 2 years, both OTS and the Federal Reserve have assigned examiners to Justice investigations. Justice officials work closely with institutions' regulators to obtain and review their examination reports and workpapers, analyze SARs filed, and determine if any civil enforcement actions were taken against the institution. Justice officials said they will coordinate enforcement actions with financial regulators and FinCEN when feasible, checking with both to see if they are planning an enforcement action against the institution. According to Justice, the challenges of coordinating regulatory and criminal enforcement include grand jury secrecy requirements and the differing length and pace of investigations and negotiations. Justice officials said that all their BSA cases against financial institutions have involved systemic, long-term failures in the BSA program and substantial evidence of willful blindness on the part of the institution toward money laundering activity taking place through the institution. In 2005, Justice formalized procedures that require U.S. attorneys to obtain approval from Justice's Asset Forfeiture and Money Laundering Section in cases where financial institutions are alleged to be BSA offenders. Attorneys are to consider factors such as the availability of noncriminal penalties, prior instances of misconduct, remedial actions, cooperation with the government, and collateral consequences of conviction when determining what type of action, if any, should be taken. Justice officials said they instituted the procedures to provide more review of significant AML cases (in particular, the nature of the violation and its impact) and promote uniformity and consistency in enforcement approaches. According to Justice officials, the new procedures have been well received. Over the last 3 years, Justice took four criminal BSA enforcement actions against financial institutions (see table 8). All the actions resulted in deferred prosecution agreements; three were against depository institutions, and the remaining case represents the first criminal BSA enforcement action against an MSB. Justice announced each of the actions on the same day that FinCEN and the regulators announced their civil enforcement actions. The forfeiture amounts generally correspond to the criminal proceeds laundered by the institutions. FinCEN has increased resources dedicated to its regulatory programs and provided some effective regulatory support and outreach to industry; however, improvements could be made in its information-sharing efforts with regulators.
From 2001 to 2008, FinCEN staff dedicated to regulatory efforts increased from 36 to 84. FinCEN has coordinated BSA regulation development and supported regulators' examination processes in various ways, including providing input on examination guidance. In 2007, FinCEN created a new unit to provide outreach efforts, such as a helpline, that were well received by industry. FinCEN also has improved its management of referrals from regulators by replacing a paper-based system with an electronic one. However, the lack of an agreed-upon process for communication on IRS referrals may delay timely feedback to IRS-examined entities and allow these institutions to continue operating without correction after deficiencies are identified. Since our April 2006 report, FinCEN has increased the number of information-sharing MOUs with federal and state regulators and has taken steps to assess these MOUs. FinCEN and CFTC recently finalized an MOU, without which they previously did not have an agreed-upon framework for more consistent coordination and information sharing. FinCEN also has been discussing how to improve analytical support with the regulators. However, some state, securities, and futures regulators have limited electronic access to BSA data, which impedes their risk scoping for examinations and their ability to independently verify information in institutions' BSA filings. FinCEN officials said they finalized a regulatory data-access template in July 2008 and have begun providing additional state regulators with direct electronic access, and anticipate providing expanded access to the federal functional regulators. Parallel to its increase in overall budget authority, FinCEN has increased resources dedicated to its regulatory programs. FinCEN officials said the bureau consults with other regulators and examining agencies as necessary when developing rules and implementing regulations, provides examination support to regulators, and conducts BSA-related training sessions and events for industry and regulators. As shown in table 9, FinCEN's budget authority and regulatory-dedicated staff grew from fiscal year 2001 through fiscal year 2007. FinCEN's budget authority grew from $38 million in fiscal year 2001 to $73 million in fiscal year 2007. Since 2005, the bureau's budget authority essentially has been flat. From fiscal year 2001 through fiscal year 2007, the number of FinCEN staff dedicated to regulatory policy and programs approximately doubled, from 36 to 77. The total number of FinCEN staff increased nearly 75 percent, from 174 to 302. FinCEN regulatory policy and program staff work in RPPD, which consists of the Offices of Regulatory Policy, Compliance, Enforcement, Regulatory Analysis, and Outreach Resources. According to FinCEN officials, these staff work on issues that involve multiple financial sectors, although many employees have subject matter expertise for particular industries or sectors. As of September 2008, FinCEN officials said that RPPD had a staff of 84. Since 2001, several regulators also have provided detailees to FinCEN to supplement expertise in particular areas or work on specific projects. For example, from 2007 through 2008, a detailee from the Federal Reserve worked on an industry survey about the potential effects of rule making related to FinCEN's cross-border wire transfer study and served as a subject matter expert regarding payment systems. In addition, from 2002 through 2005, two IRS detailees to FinCEN worked with RPPD to resolve multiple outstanding compliance issues.
In addition, FDIC officials said that from 2005 through 2008 the agency provided 11 detailees to assist with report processing and other assignments. BSA provides Treasury with overall regulatory authority to administer the act and authorizes Treasury to issue regulations, sometimes jointly with federal financial regulators, to implement BSA requirements. FinCEN, the bureau within Treasury responsible for administering BSA, has overall responsibility for Treasury's BSA regulatory program. FinCEN officials said that within RPPD, the Office of Regulatory Policy is responsible for developing, modifying, and interpreting regulations and consults as necessary with other regulators and examining agencies. FinCEN officials said that, depending upon the subject matter of a regulatory initiative, their interactions with regulators on BSA implementing regulations can range from extensive collaboration to a notification that regulations are available. In addition to meetings with regulators, FinCEN officials stated they obtain feedback from regulators on BSA issues through BSAAG and its multiple subcommittees. Referring to the USA PATRIOT Act, some federal agency officials observed that the development of some regulations was collaborative and an improvement compared with other processes in which the regulators were less involved. FinCEN officials said their work in recent years with SEC and CFTC—an outgrowth of the USA PATRIOT Act—generally has been collaborative, particularly given the newness of the securities and futures industries to the BSA/AML regulatory framework. SEC staff said they often met with FinCEN to discuss BSA issues (including rules development and related FinCEN guidance). Also, FinCEN sometimes participated in SEC's quarterly BSA meetings with the SROs, discussing the scope of reforms and clarifying guidance or other issues. FINRA officials said that FinCEN and SEC directly collaborated on rules for broker-dealers, and FINRA was able to provide input in these discussions only through SEC. While FINRA officials said that they coordinated well with SEC, they felt that direct and earlier coordination with FinCEN on rule and guidance development would have increased the efficiency of the process. CFTC officials stated that work with FinCEN on the drafting of futures-related BSA/AML rules and guidance has been collaborative. For instance, as required by BSA, FinCEN and CFTC jointly issued regulations in 2003 for futures commission merchants and introducing brokers requiring them to establish CIPs. However, according to CFTC officials, the rule resulted in some confusion about its applicability in situations where more than one futures commission merchant was involved in a transaction with the same customer. In April 2007, FinCEN and CFTC jointly issued guidance to clarify the responsibilities in such a transaction. NFA officials said the guidance has been well received by its members and clarified issues surrounding a firm's BSA/AML role with its customers. FinCEN and IRS officials had differing views on the degree of collaboration that occurred during the revision of MSB-related regulations. As discussed previously, FinCEN and IRS completed a coordinated strategy in 2008 to better identify and select NBFIs for examination.
The coordinated strategy states that FinCEN would work with regulatory partners to explore the feasibility of removing or exempting from the definition of MSBs certain types of transactions or subcategories of MSBs that pose relatively little risk of facilitating financial crimes. At the time of this report, FinCEN was in the process of incorporating revised MSB definitions into its guidance and regulations. Although legislation does not require FinCEN to conduct joint rule making on MSB issues, FinCEN officials stated that RPPD staff have briefed other offices and divisions in FinCEN as well as IRS, federal banking regulators, Treasury officials, various law enforcement agencies, and the BSAAG NBFI subcommittee on the proposed MSB rule making. The BSAAG NBFI subcommittee, of which IRS is a member, also sent a list of issues for FinCEN to consider when redefining MSBs, which FinCEN officials said they reviewed. FinCEN officials said they met with IRS staff in May 2008 to discuss the advance notice of proposed rule making. According to FinCEN officials, they also developed a majority of their guidance and administrative rulings by reviewing questions received from the financial industry through their Regulatory Helpline (which institutions and regulators may call with questions) or other correspondence. For example, FinCEN officials said they review questions asked of the Office of Outreach Resources to determine what issues concern industry, and the results of the reviews are forwarded to the Office of Regulatory Policy. (We discuss the Office of Outreach Resources and FinCEN helplines in more detail below.) FinCEN and RPPD's Office of Compliance provide examination support for financial regulators in various ways. These methods include providing input on examination guidance and working with regulators to address specific issues (such as risk scoping). For instance, FinCEN actively participates in FFIEC working groups to revise the FFIEC BSA/AML manual and develop examiner training. In February 2007, FinCEN established a working group comprising federal and state agencies, with the goal of identifying and implementing several large initiatives to more effectively regulate and supervise the activities of MSBs. As previously discussed, FinCEN, IRS, and state regulators worked together in this forum to develop an MSB BSA/AML examination manual that was issued in December 2008. FinCEN officials said they will work with IRS and the manual working committee to develop a roll-out plan and provide training to IRS and state examiners, and the working group will continue to meet to address other MSB-related issues. FinCEN also has reviewed SEC's and its SROs' nonpublic examination procedures. Additionally, SEC and FinCEN cooperated to develop Web-based tools ("AML source tools") that compile applicable BSA/AML rules and regulations for mutual funds and broker-dealers as well as other helpful information and contacts. SEC staff stated that they also developed "plain English" guidance on the examination process to be made public in response to further industry requests for access to SEC's nonpublic examination module. SEC provided the draft guidance to FinCEN for its input; however, FinCEN officials said their review is on hold because their staff are working on other priorities and industry already has the AML source tools as guidance.
While FinCEN has worked similarly with CFTC on guidance to its industry, FinCEN officials said that CFTC's SROs have not provided their examination module and procedures to FinCEN but intended to do so after the information-sharing MOU between FinCEN and CFTC was finalized. However, FinCEN and CFTC officials stated they have held meetings on the examination procedures of futures SROs. As part of the effectiveness and efficiency initiative announced by the Treasury Secretary in June 2007, FinCEN has been studying how the regulatory agencies are approaching risk scoping for examinations. Its goal is to develop new tools and guidance that would enable agencies to better direct their examination resources. FinCEN officials stated they evaluated tools and processes that allow examiners to analyze information and patterns in BSA data from a specific institution to help identify areas that may require closer review, and jointly identified ways to enhance these tools. For example, FinCEN officials said they and the federal banking regulators are developing an enhanced BSA data analysis tool to incorporate into pre-examination scoping processes that will allow the federal banking regulators to better target their resources. Federal banking regulator officials stated that the tool would help them better analyze BSA data for a particular institution, but not conduct analyses across institutions. In addition to supporting regulators' examination efforts and undertaking process- or issue-specific initiatives, FinCEN officials said the bureau also has produced targeted financial institution analyses. These are produced after a regulator makes a specific request for detailed analytic information related to a particular institution or individual. Office of Regulatory Analysis staff said they have collaborated with regulators to produce 42 such reports during fiscal year 2007 and the first three quarters of fiscal year 2008. With respect to its role in achieving greater BSA/AML examination consistency, FinCEN officials stated that, resources permitting, they would like to increase their efforts in areas such as examiner training, developing and providing additional compliance referrals to regulators, periodically joining examiners in the field, and conducting additional macro-level analysis of BSA compliance. (We discuss FinCEN's analytical products in a later section.) FinCEN officials said they have held various meetings with regulators to discuss their examination processes, but that they have not held meetings inclusive of all regulators. Further, as discussed previously, without an information-sharing MOU in place, FinCEN had been unable to obtain examination procedures for the futures industry, hindering its ability to review issues of BSA/AML examination consistency. FinCEN has implemented new outreach initiatives and conducted support efforts on BSA guidance that were well received by industry. The Office of Outreach Resources was created in 2007 and has primary responsibility for operating the Regulatory Helpline that industry and regulators may call with BSA-related questions. FinCEN staff also operate the Financial Institutions Hotline, which financial institutions may call to report suspicious activity related to terrorist financing. For the past 3 years, FinCEN has surveyed customers who use the Regulatory Resource Center, which includes the Helpline and FinCEN's Web site.
According to FinCEN’s surveys, in all 3 years, FinCEN staff calculated more than 90 percent of respondents—primarily industry representatives—favorably rated the guidance they received. FinCEN officials said that as part of its efforts to make the administration of BSA more efficient and effective, FinCEN published proposed rules in the Federal Register in November 2008 that centralize, without substantive change, BSA and USA PATRIOT Act regulations to a new chapter within the Code of Federal Regulations. FinCEN officials said that the proposed rules would streamline BSA regulation into general and industry-specific parts, with the goal of enabling financial institutions to more easily identify their BSA responsibilities. The Office of Outreach Resources also coordinates with BSAAG and supports speaking engagements to the financial industry and regulatory groups. FinCEN officials told us they have facilitated BSAAG subcommittee meetings (such as ones on banking, insurance, law enforcement, SARs, and securities and futures) throughout the year. In 2007, FinCEN reported participating in almost 100 domestic and overseas outreach events on BSA issues relating to banking, securities, futures, MSBs, jewelers, casinos, insurance companies, and credit unions. Industry officials with whom we spoke generally were positive about FinCEN’s outreach to industry, including these events and some of the public products available on FinCEN’s Web site. Banking industry association officials felt that FinCEN had been helpful in listening to concerns of the banking industry. Securities industry officials stated they thought FinCEN had been very responsive to inquiries from broker-dealers and found some of FinCEN’s publicly available reports to be very useful, including “SAR Activity Review: Trends, Tips, and Issues” and mortgage fraud reports. FinCEN officials presented these reports at events and included a discussion of how SARs have contributed to law enforcement investigations. A representative of a futures firm with whom we spoke said the firm used the SARs publications as part of its training program. Securities SRO officials said they felt FinCEN was doing an excellent job of industry outreach, in particular showing the industry how BSA data filings were used effectively to prosecute money laundering and other financial crimes. In January 2008, FinCEN’s Office of the Director—with participation from RPPD, the Analysis and Liaison Division, the Technology Solutions and Services Division, and the Office of Chief Counsel—began a new outreach program to the financial community. By developing a better understanding of the needs and operations of institutions, FinCEN officials suggested that the agency will be in a better position to help institutions effectively operate BSA/AML programs. The outreach program’s goals include learning how institutions’ BSA/AML programs and analytical units operate. The first stage of the outreach program is targeted to the 15 largest depository institutions. According to FinCEN, they will expand outreach to other depository institutions and industry sectors, but have not finalized the timetable for the later stages of the program. In 2006, FinCEN implemented an automated Case Management System (CMS) to track its processing of BSA compliance referrals, which replaces a paper-based system. 
While its efforts to track referrals have improved, FinCEN's processing times for IRS referrals, combined with IRS's limited enforcement authority, may have limited IRS's BSA compliance activities among NBFIs. According to their MOUs with FinCEN, the federal banking regulators, SEC, and IRS are to inform FinCEN of any significant potential BSA violations and provide BSA-relevant examination reports. In 2006, FinCEN implemented an automated system—CMS—to track these BSA compliance referrals. Prior to CMS, FinCEN tracked BSA compliance referrals manually through a paper-based system. FinCEN officials stated that CMS enables RPPD's Offices of Compliance and Enforcement to track cases from receipt to final disposition, analyze the data, and produce management reports. Figure 2 depicts the overall process by which FinCEN receives and tracks these referrals. As shown in figure 2, the Office of Compliance receives referrals from regulators or referrals that are self-reported by institutions and, after receipt, opens corresponding cases in CMS. These matters are assessed by compliance specialists who, in making their assessment of each referral, consider factors such as the type of violation and the number of times it occurred; whether the violation was systemic or technical; whether the violation was willful or a result of negligence; how long the deficiency existed; and whether the violation surfaced through self-discovery or an examination. Compliance staff must complete the initial assessment within 60 days, after which the case is reviewed by a compliance project officer, the compliance program manager, and, finally, the assistant director of compliance. As part of these assessments, Office of Compliance staff may request additional data analysis from the Office of Regulatory Analysis or additional documentation from the institution's regulator. Federal banking regulator and SEC staff confirmed that FinCEN staff have requested additional information about their referrals. After a referral is assessed, Office of Compliance management decides whether to take one of the following actions: (1) close a case with no action; (2) send a notification letter to the institution indicating that the regulator informed FinCEN of the matter and that nothing precludes FinCEN from further action if FinCEN or the regulator finds that all corrective actions have not been implemented; or (3) present the matter to FinCEN's Regulatory Enforcement Committee. FinCEN officials estimated that the Office of Compliance has forwarded approximately 6 percent of referrals to the Office of Enforcement. The Regulatory Enforcement Committee consists of compliance and enforcement staff who review the case and decide whether to forward it to the Office of Enforcement for further investigation. After it is decided that a case will be referred to the Office of Enforcement, the case is closed by Office of Compliance staff in CMS and the Office of Enforcement opens a new Enforcement case in CMS. FinCEN officials said that the fundamentals of the enforcement investigative process are the same, regardless of the source of the referrals. And, as with Compliance staff, Enforcement staff may request additional data analysis or documentation when making their decisions. They document their investigation in a recommendation memorandum to the Assistant Director of the Office of Enforcement. After the assistant director has reviewed the case, Enforcement staff contact the referring agency to discuss the matter. If no action is warranted, Enforcement closes the case.
If a CMP is warranted, Enforcement issues a charging letter to the financial institution. The financial institution is required to respond in writing within a specified period (usually 30 days from the date of the letter). The assistant director and an enforcement specialist then review the financial institution's written response to determine whether to proceed with a CMP negotiation meeting or to close the matter with an alternative action, such as a warning letter, or no action. FinCEN Enforcement officials said that if a warning letter is issued, it will be routed internally for approval through the Associate Director of RPPD, and a copy will be sent to the relevant regulator. FinCEN's Director said in an October 2008 speech that FinCEN considers enforcement actions only when a financial institution exhibits a systemic breakdown in BSA compliance that results in significant violations of its BSA obligations. Table 10 shows the number of referrals RPPD received during fiscal years 2006 through 2008, the number of cases closed within the Offices of Compliance and Enforcement, and average processing times. According to IRS officials, long delays in processing referrals and the lack of an agreement on time frames have limited IRS's BSA compliance activities among NBFIs. Unlike the federal financial regulators that have independent enforcement authority to issue informal and formal enforcement actions, IRS officials can send only a Letter 1112 to an institution, which includes a statement that a copy of their report is required to be sent to FinCEN and that FinCEN will determine if penalties under BSA are to be imposed (see discussion in previous section). Therefore, when IRS finds an NBFI with significant BSA deficiencies, it must refer the case to FinCEN for further action. In fiscal years 2006 through 2008, IRS sent approximately 50 referrals to FinCEN. After a referral is made to FinCEN, IRS officials said they do not conduct a follow-up visit with the institution to determine if corrective action has been taken until FinCEN makes a determination on the referral, as they do not want to take any actions that might negatively affect a potential FinCEN enforcement action. IRS officials believe FinCEN's response time is too long. FinCEN officials stated that IRS referrals often require follow-up for additional information or supporting documentation, which affects processing times. As noted in table 10 above, FinCEN's average processing time for all referrals in fiscal year 2008 was 208 days in its Office of Compliance and an additional 277 days if a case was referred to its Office of Enforcement. IRS and FinCEN officials met in early 2008 to discuss processing times and what information an IRS referral should contain. IRS officials said they have seen progress in the last several months, with more IRS referrals being processed. Although IRS officials stated that they would like an agreement with FinCEN on referral processing times, no formal agreement has been negotiated. FinCEN officials said that they do not have established time frames for responding to referrals because response time often varies depending on the thoroughness of the referral and the need for follow-up with the examiner. They said that processing of referrals also depends on interagency coordination. For example, law enforcement authorities might ask FinCEN to refrain from advancing certain cases because of pending criminal investigations.
While FinCEN and IRS recently have been meeting more frequently to discuss IRS referrals, no formal agreed-upon process exists to address IRS referral issues and provide more timely feedback to IRS-examined institutions on their AML efforts. The lack of an agreed-upon process for handling referrals, combined with IRS's inability to take certain enforcement actions on its own, may result in these institutions continuing to operate without correction, potentially remaining out of compliance with BSA. FinCEN has increased the number of information-sharing MOUs with regulatory agencies, which has improved coordination of enforcement actions and BSA data reporting for the banking and securities industries. FinCEN officials said that through the information-sharing MOUs they made progress in developing their relationships with the federal banking regulators, SEC, and IRS. Since our April 2006 report, FinCEN had implemented an MOU with SEC (in December 2006) and, as of October 2008, established MOUs with 46 state agencies. After several years of drafting, FinCEN and CFTC finalized information-sharing and data-access MOUs in January 2009. FinCEN officials said that the MOU process significantly increased the level of information sharing with the federal banking regulators since its implementation in 2004. FinCEN officials also said that the federal banking regulators made good faith efforts to comply with the MOU and provide FinCEN with reports on time. Officials from most federal banking regulators stated that their 2004 MOU significantly strengthened interaction with FinCEN and provided structure for coordination on enforcement actions and information sharing. In addition, FinCEN's Director, together with Treasury's Under Secretary for Terrorism and Financial Intelligence, meets quarterly with the principals of the five federal banking regulators to discuss coordination and BSA administration for the industry. While federal banking regulator officials emphasized that they may take enforcement actions independent of FinCEN under their own authorities, they ensure that FinCEN is aware of these actions as agreed upon in the MOU with FinCEN. Federal Reserve officials said that such information sharing generally involves referral of all BSA/AML-related examination issues that are resolved through informal and formal enforcement actions. They explained that when taking an informal action—such as a commitment letter or MOU—they provide notice to FinCEN. OTS officials said they have quarterly meetings with FinCEN during which they discuss any BSA-related informal or formal actions, as well as any related matters. Moreover, federal banking regulators said they make FinCEN aware of formal actions, such as CMPs or written agreements, well in advance of when the actions will be taken. For example, if the regulators are going to impose a CMP, they will inform FinCEN early enough to ensure the process is fully coordinated. Federal Reserve officials said that since the 2004 MOU, they imposed all BSA/AML-related CMPs concurrently with FinCEN penalties. NCUA officials also said they make FinCEN aware of informal and formal actions, and would coordinate with FinCEN prior to the issuance of a CMP, if necessary. OCC officials said they also coordinate any CMPs with FinCEN and that in recent years FinCEN has been much quicker in assessing CMPs in conjunction with OCC.
They cited a case prior to the implementation of the MOU—the Riggs Bank case—during which they said they had to wait more than a year to issue a CMP in coordination with FinCEN. FDIC and OTS also noted they have worked closely with FinCEN in the past few years on the development of BSA/AML-related enforcement actions against several institutions. (App. III contains examples of BSA/AML-related enforcement actions.) Several federal banking regulators also cited their 2004 MOU with FinCEN as beneficial in terms of improving agencies' internal processes for tracking violations and enforcement actions. Some federal banking regulator officials said that as part of responding to the information-sharing requirement of the MOU (that is, providing FinCEN with quarterly BSA examination, violation, and enforcement data), they established centralized, automated data collection programs that have improved the quality of their BSA examination data. For instance, FDIC officials said their agency internally standardized the processes for collecting BSA data as a result of the MOU. Federal Reserve officials also reported that enhancements to the agency's data management system have streamlined the information it gathers for FinCEN under the MOU. While federal banking regulators have made improvements in their systems for collecting and reporting BSA/AML-related data, differences remain in how they cite violations. In our 2006 report, we found that federal banking regulators were using different terminology to classify BSA noncompliance and recommended that FinCEN and the federal banking regulators discuss the feasibility of developing a uniform classification system. Since our report, FinCEN and the federal banking regulators established an interagency working group that is reviewing guidance relating to the citing of BSA violations and is considering additional guidance on citing systemic versus technical AML violations. One federal banking regulator stated that while BSA/AML violation terminology is generally comparable, federal banking regulators have different definitions for the same terms. However, to implement their MOU, FinCEN officials said that they discussed what a "significant violation" means and that they came to agreement (see previous discussion). SEC and FinCEN staff stated that their December 2006 MOU had been beneficial overall, although it is still in the relatively early stages of implementation. Pursuant to their MOU, SEC shares examination findings with FinCEN after a significant BSA deficiency is found. For enforcement actions, SEC provides notice to FinCEN prior to the action becoming public. In addition, SEC receives information from the SROs about BSA/AML-related significant deficiencies or potential enforcement actions and provides that information to FinCEN. SEC and FinCEN staff said the MOU is still in the early stages of implementation and that SEC and FinCEN recently met and reached agreement on steps to further coordination. SEC staff also said that their agency's MOU with FinCEN has provided a framework for the quarterly collection and reporting of BSA/AML examination, violation, and enforcement action data. While SEC staff stated they had provided FinCEN with data prior to the MOU, it was on a more limited basis. Prior to the MOU, SEC cited BSA violations under provisions of the USA PATRIOT Act. Under the MOU, SEC cites violations under BSA, which allows for more specific citations. As a result, under the MOU, SEC provides additional examination information regarding BSA violation categories and subcategories.
For example, SEC previously would cite a violation relating to CIPs under Section 326 of the USA PATRIOT Act. Because of the MOU, SEC can determine which of the multiple subcategories of BSA it may cite for deficiencies in a firm's CIP. (See table 3 earlier for these data.) CFTC, the last federal functional regulator to sign an information-sharing MOU with FinCEN, had no agreed-upon formal mechanism by which to coordinate or share information with FinCEN until the MOU was finalized in January 2009. CFTC officials stated they approached FinCEN about developing an MOU in fall 2004. CFTC and FinCEN cited delays on the part of both parties in moving forward with the MOU. In fall 2008, CFTC officials said that they developed standard procedures for obtaining BSA/AML examination information from its SROs in anticipation of the MOU's finalization. Specifically, CFTC developed templates that identify the episodic, quarterly, and annual report data that will be required to be reported under the MOU and already had received reports from its SROs as of fall 2008. Previously, CFTC did not compile BSA/AML examination statistics, including information on the types of violations cited. Further, FinCEN officials said that CFTC's SROs had not provided their examination modules and procedures to FinCEN but intended to do so after an MOU with CFTC was finalized. Without an MOU in place, CFTC's and FinCEN's abilities to evaluate BSA/AML compliance in the futures industry were limited. For example, without examination procedures and data similar to those provided by other regulators, FinCEN was not able to evaluate the extent to which BSA/AML regulations were being examined consistently in the futures industry in relation to other sectors. Further, without such information, FinCEN and CFTC were not able to jointly determine areas of BSA compliance weakness and better target guidance or outreach efforts. According to best practices for collaboration, federal agencies engaged in collaborative efforts should create the means to monitor, evaluate, and report their efforts. FinCEN and CFTC officials recognized the benefit of an MOU and developed information-sharing and data-access MOUs (see later discussion on data access) that were completed in January 2009. While some improvements have been made, FinCEN and IRS disagree on aspects of their MOU and are discussing methods to improve coordination. IRS officials said they receive very little benefit from their MOU with FinCEN and have asked to renegotiate its terms, but FinCEN has declined, saying the MOU is only 3 years old. However, FinCEN officials said they are in frequent communication with IRS regarding the operation of their MOU and provided documentation of some of these meetings. IRS officials said they believe some of the information they are asked to collect and provide under the MOU is of little use to FinCEN. For example, IRS officials did not think FinCEN made use of IRS's reports of the numbers of Form 8300 and Report of Foreign Bank Account examinations and violations. According to IRS officials, FinCEN has not held a formal meeting with IRS to discuss the implementation of the MOU, as required by the MOU. However, FinCEN officials stated they have frequent meetings with IRS staff on improving various aspects of BSA administration and information-sharing processes under the MOU.
For example, due to recent meetings with FinCEN, IRS officials said that FinCEN improved its time frames for providing responses in cases when IRS officials send FinCEN technical questions they have about BSA compliance in their supervised entities. FinCEN officials said that in creating their 2008–2012 strategic plan, they revised goals and performance measures to respond to an assessment and recommendations from the Office of Management and Budget. For fiscal year 2006, the Office of Management and Budget rated Treasury's BSA administration as "results not demonstrated," and FinCEN received low ratings for developing outcome-based performance measures and achieving program results. In fiscal year 2007, a FinCEN working group examined what would constitute meaningful performance measures for the BSA program. The working group developed an MOU compliance metric, which measures how effectively MOU holders believe their MOUs facilitate information exchange. In 2008, FinCEN completed a survey of customer perceptions of the services it provides to the federal and state agencies with which it has information-sharing MOUs. Using results from multiple survey questions, FinCEN staff stated they created a public performance measure and calculated that 64 percent of MOU holders surveyed found FinCEN's information sharing valuable in improving regulatory consistency and compliance in the financial system. FinCEN has set a goal of increasing results for this measure by 2 percentage points annually. Through the survey, FinCEN officials said they also obtained 26 written comments, 14 of which offered suggestions for improving information-sharing MOUs (for example, by providing more communication and feedback). FinCEN has taken steps to improve analytical products for regulators to assist them with their BSA/AML compliance efforts and has been discussing additional products. While some regulators have direct electronic access to BSA data, others have access only through other agencies. For example, FINRA conducts the vast majority of broker-dealer examinations and does not have direct electronic access to BSA data; instead, it must go through FinCEN or SEC to obtain data. FinCEN officials said they finalized a regulatory data-access template in July 2008 and have begun providing additional state regulators with direct electronic access, and anticipate providing expanded access to the federal financial regulators. A FinCEN official said that they are working on data-access MOUs for SROs. Under their information-sharing MOUs, FinCEN is to provide analytical products to regulators. As it collects and manages all BSA-related data, FinCEN is in an optimal position to produce analytical products that assess BSA-related issues within and among financial sectors and regulators. FinCEN classifies the analytical reports it produces for its stakeholders into two categories: reactive and proactive. As discussed earlier, FinCEN conducts targeted financial institution analyses for regulators at their request. These analyses are considered reactive reports. As of September 2008, FinCEN's proactive reports included strategic BSA data assessments, "By the Numbers" reports (such as its SAR reports), state-specific BSA data profiles, and reports of possible unregistered and unlicensed MSBs (produced for IRS). FinCEN stated that the issues for which it chooses to conduct "strategic BSA data assessments" vary.
For example, FinCEN officials said the bureau produced a residential real estate assessment after an initial report on commercial real estate as a possible venue for money laundering. FinCEN also conducted an assessment of mortgage fraud after its Office of Regulatory Analysis observed a spike in SAR filings related to mortgage loan fraud. FinCEN officials said that it takes about 4 to 6 months to produce such assessments but that they expect this time to be significantly shortened after FinCEN's planned modernization of the BSA database. While the reports are not produced on a regular schedule, FinCEN officials said that the bureau has at least one assessment underway at all times. FinCEN also biannually produces "By the Numbers" public reports that compile numerical data from SARs and supplement the "SAR Activity Review: Trends, Tips, and Issues," as well as state-specific BSA data profiles showing analysis of BSA filing trends for the 46 state agencies with which FinCEN has information-sharing MOUs. FinCEN began producing "State BSA Data Profiles" in May 2007 and said it had received input and some positive feedback from state and federal banking regulators. Moreover, some industry officials told us that these publicly available SAR reviews were very useful components of FinCEN's outreach efforts. In 2008, FinCEN, after discussions with SEC, began providing SEC with reports of securities-related SARs filed by depository institutions. The purpose of these reports is to alert SEC to any possible securities violations observed by depository institutions. To compile the reports, FinCEN analysts search on key terms provided by SEC. SEC staff said they have found these downloads very useful to their general enforcement and examination programs. Approximately each quarter since June 2006, FinCEN has issued reports on possible unregistered and unlicensed MSBs (found by reviewing SARs filed by depository institutions). IRS officials have used the information to contact and register previously unregistered MSBs. IRS officials also telephone the unregistered MSBs to make sure the entities understand their BSA obligations. Despite the provision of more analyses, most MOU holders with whom we spoke thought different or additional FinCEN analysis would be useful for their BSA compliance activities and have been discussing such products with FinCEN. In particular, some federal banking regulators said that the summary reports of numbers of examinations, violations, and enforcement actions among depository institutions that FinCEN provides them on a quarterly basis were of little use, as they were compilations of data the federal banking regulators had given FinCEN. Although FinCEN provides analyses of issues after reviewing data and reports, federal banking regulator officials thought it would be more beneficial to receive analytical information to assist them in examination preplanning and scoping processes, which would allow them to better focus their BSA/AML resources and efforts. Federal banking regulators cited requests for additional analysis that they have made to FinCEN through the FFIEC BSA/AML working group. For instance, several federal banking regulators have requested state, regional, and national analysis of CTRs and SARs by type of institution, and additional analysis of MSBs and 314(a) hits. As they have limited access to BSA data, federal banking regulators are unable to conduct these analyses themselves. (We discuss data access issues in the following section.)
IRS officials said they wanted reports similar to those FinCEN provides to law enforcement, such as analyses of potential money laundering along the U.S. southwest border. IRS officials said such reports would be helpful in determining where to allocate the agency's examination resources. FinCEN officials said that they provide IRS (along with the federal banking regulators) a consolidated package containing the annual BSA data profiles for all states and certain U.S. territories. SEC staff said they have had at least two discussions with FinCEN staff about analytic products that FinCEN could provide, and they expected further discussions would take place. FinCEN officials stated they needed to concentrate on providing products that could benefit multiple agencies to ensure they were using FinCEN resources effectively. As part of its efficiency and effectiveness initiative, FinCEN said it has identified ways it could increase its analytical support to regulators by providing products with useful information on macro-level risks. FinCEN officials said they are incorporating steps into FinCEN's information technology modernization plans that will make the development of these products more feasible. FinCEN said it has been developing analyses of 314(a) hits to better inform regulators. In addition, one federal banking regulator and FinCEN have agreed to different approaches for obtaining supplemental BSA data analysis. In fall 2008, FDIC officials completed arrangements to have an FDIC analyst work at FinCEN on a part-time basis, and that analyst began work with the Office of Regulatory Analysis. FinCEN officials said that they are open to detailees from more regulators, as it would also help them better understand which types of analysis are most useful to the regulators. With the exception of IRS, which maintains and stores all BSA information filed, FinCEN has developed data-access MOUs with some financial regulators to provide them with direct electronic access to BSA data. However, the level of access across financial regulators is inconsistent and has inhibited agencies' compliance activities. For example, FinCEN provides the federal banking regulators with access to CTRs for depository institutions, SARs for depository institutions, and other reports. Federal banking regulators access this information through a secure system but are limited to downloading a certain number of records at a time. Officials from some federal banking regulators said that access to SARs or CTRs filed by institutions other than depository institutions would be useful. One official explained that some institutions, while regulated by others, can be affiliated with their supervised institutions. For example, an MSB may file a SAR on a bank's customer, but the federal banking regulator does not have access to the SAR filed by the MSB. Unlike other federal banking regulators, OCC officials arranged with FinCEN to receive SAR data directly. For about 5 years, OCC has received a monthly compact disc with SAR data for the banks it regulates. With these data, OCC created the "SAR Data Mart," which its staff use to take action against unlawful activity committed by depository institution insiders and to evaluate operational risk. OCC staff have found the ability to conduct their own analyses very useful. SEC staff said they use their direct access to BSA data to review approximately 100 to 150 SARs for securities and futures firms daily.
Furthermore, SEC staff said their access to these SARs has expanded their SAR review activities and enhanced SEC's enforcement and examination programs. In contrast, futures and securities SROs (including FINRA) and some state agencies that conduct BSA/AML examinations currently do not have direct electronic access to BSA data. Some of these regulators' requests for such access have been pending for several years. FINRA—which conducts the majority of broker-dealer examinations (more than 2,000 in fiscal year 2008)—does not have direct electronic access to BSA data and must request SARs through SEC and FinCEN. With direct electronic access, FINRA and state agency officials told us, they could more effectively risk scope their examination processes. Risk scoping by regulators may include reviewing the number of SARs and CTRs filed by institutions under their supervision to identify areas within an institution's program, or institutions among their supervised entities, on which to concentrate, enabling regulators to better plan their examinations and target their resources accordingly. As discussed above, federal banking regulators use BSA data to risk scope their examinations. Further, FINRA officials said that, due to the large number of examinations they conduct, it would strain SEC's resources if FINRA asked SEC staff for access to every SAR filed by the institution under review. Therefore, FINRA staff request SARs from FinCEN primarily when FINRA staff suspect a firm may not have filed all the SARs it says it filed. FINRA officials said they often experienced delays in receiving the information. They also said they started to develop an MOU with FinCEN in 2002; however, the last time FINRA discussed data access with FinCEN was in March 2006. CFTC is the last federal functional regulator to be provided direct electronic access to the BSA database. CFTC officials said that they made a formal request for direct access to BSA data in 2005. FinCEN officials said that, until recently, FinCEN and CFTC had not agreed on the terms of an electronic access MOU for BSA data. FinCEN and CFTC signed a data-access MOU concurrently with their information-sharing MOU in January 2009. Previously, if CFTC wanted BSA information, it had to make case-by-case requests to FinCEN. Similar to FINRA, CFTC officials said that while FinCEN responded quickly to emergency BSA data requests, nonemergency requests could take much longer. CFTC officials said that the data-access MOU will permit CFTC to make BSA database inquiries in certain circumstances on behalf of an SRO. They said that they recognize the unique and highly sensitive nature of BSA data and that providing the SROs with direct access to BSA data presents certain legal and regulatory oversight issues. FinCEN explained it has been conducting a comprehensive evaluation of data access issues. In September 2008, FinCEN completed a bureau-wide initiative to better define the types of regulatory agencies to which it will provide electronic BSA data access and the criteria and processes for evaluating data access requests. FinCEN determined it would consider requests from agencies that examine for BSA compliance; supervise a financial institution for safety and soundness or financial responsibility; issue licenses or charters to financial institutions; or administer or enforce laws, regulations, or rules affecting financial institutions or markets.
In evaluating these requests, FinCEN officials said that staff look at the requester's regulatory authorities, ability to protect sensitive BSA data, and ability to utilize confidential information. But they said that SROs present unique issues because of their status as private actors, rather than governmental authorities. Although FinCEN said it anticipates providing SROs with access to appropriate data, their nongovernmental status requires FinCEN to contemplate appropriate access restrictions. FinCEN officials said they finalized a regulatory data-access template in July 2008 and have begun providing additional state regulators with direct electronic access, and anticipate providing expanded access to the federal financial regulators. A FinCEN official said that they are working on data-access MOUs for SROs. Without electronic access to BSA data, some regulators cannot effectively scope risks for examinations, affecting their ability to efficiently plan examinations and target limited resources to areas of greatest risk. In addition, without direct access, regulators cannot verify, in accordance with their examination procedures, the information that institutions are reporting on their BSA filings unless they request it from FinCEN or another regulator that has access, thereby straining already limited resources. For example, as discussed above, some regulators (such as FINRA) must contact FinCEN to obtain access to some SARs, further expending FinCEN's and their own limited resources. Through the USA PATRIOT Act, more activities of a larger number of financial institutions have come under the umbrella of U.S. anti-money laundering efforts. As the BSA regulatory framework has expanded, it also has become more complex, making it all the more important that FinCEN and the regulators establish effective communication and information exchanges to achieve their common goals. While the regulators take different approaches to examination and enforcement within their jurisdictions, they all have responsibilities in the BSA/AML regulatory framework. Additional AML legislation has increased the number of financial institutions that have come under the scope of BSA, as well as regulators' interactions on these issues within and across their respective financial sectors. At the time of our 2006 report, the federal banking regulators and FinCEN already had achieved agreement on how to address some key aspects of BSA compliance and enforcement and developed a common examination manual. Since that report, FinCEN and the regulators have made additional progress in ensuring the soundness of the current compliance and enforcement framework. While many improvements in the coordination among stakeholders—FinCEN, regulators, law enforcement, and the industries being regulated—have occurred, other working relationships among the stakeholders are not as efficient and effective as they could be. IRS has not fully leveraged its resources with those of state regulators to conduct examinations of MSBs. Because IRS has not shared its examination schedules with state agencies, state agency officials told us they sometimes have scheduled examinations shortly after IRS had completed examinations of the same institutions, subjecting those institutions to duplicative monitoring.
With approximately 200,000 MSBs in the United States, better coordination of examination scheduling between IRS and its state agency partners would both better leverage limited government resources and minimize the burden placed on those being regulated. Additionally, ongoing meetings such as those of BSAAG provide for some exchange of information, but some important regulatory issues cannot be discussed at meetings at which industry is present. While it is useful to have forums in which the regulators and the regulated exchange information, the sensitive nature of some BSA issues and the nonpublic nature of some examination modules suggest that an additional forum for regular information exchange among all the regulators is called for. Whether it is coordination of efforts between IRS and state regulators or among federal regulators, opening additional avenues for collaboration can (1) facilitate the exchange of best practices and better leverage limited regulatory resources, (2) minimize the regulatory burden on those being regulated, and (3) most importantly, see that the critical concerns embodied in BSA legislation are efficiently and effectively carried out. FinCEN has taken many significant steps to improve execution of its BSA administrative and coordination responsibilities, but could make improvements in three areas: sharing information with CFTC, improving communication on IRS referrals and ensuring timely feedback to IRS-examined institutions, and reconciling outstanding data access issues. FinCEN also serves as the BSA data manager and provides the regulators with access to critical BSA data related to their supervised entities. With these data, regulators are able to scope risks for their examinations, better target their resources, and independently verify BSA data filings. However, CFTC received electronic access only in January 2009, and securities and futures SROs and some state agencies do not yet have electronic access to BSA data. With today's rapidly changing financial markets and the relationship of the futures industry to other sectors of the financial markets, it is especially important that SROs receive electronic access to BSA data to facilitate their examinations. Furthermore, IRS is hampered in carrying out its BSA-related compliance responsibilities because of uncertainties about when FinCEN will take action on IRS's referrals. Since IRS does not have enforcement authority in this area, it is important that IRS and FinCEN develop a process that facilitates communication on IRS referrals. Without timely feedback, MSBs may be allowed to continue operating in violation of BSA statutes. Finally, delays in completing data-access agreements present obstacles to some regulators attempting to carry out their BSA-related responsibilities. While FinCEN is justified in its concerns about sharing very sensitive information, the delay in establishing information-sharing and data-access MOUs with CFTC, and the failure to establish data-access MOUs with SROs and some states that also have important BSA-related responsibilities, present a different set of potential problems, such as incomplete risk-scoping of examinations. While we commend FinCEN and CFTC for finalizing their MOUs, the benefits of the agreements will take some time to be realized. Until then, the potential ramifications include less assurance on the part of regulators that these financial institutions are complying fully with the BSA.
Taking steps to resolve these areas of concern could provide tangible benefits in the BSA-related efforts of the regulators and build on recent improvements that FinCEN has made in its administrative and coordination responsibilities.

To reduce the potential for duplicative efforts and better leverage limited examination resources, we recommend that the Commissioner of IRS work with state agencies to develop a process by which to coordinate MSB examination schedules between IRS and state agencies that conduct BSA examinations of MSBs. Further, to build on improvements made in examination processes vital to ensuring BSA compliance, we recommend that the heads of FinCEN, the Federal Reserve, FDIC, OTS, OCC, NCUA, SEC, CFTC, and IRS direct the appropriate staff to consider developing or using an existing process to share and discuss information on BSA/AML examination procedures and general trends regularly in a nonpublic setting. We recommend that the heads of SEC and CFTC consider including the SROs that conduct BSA examinations. To improve its efforts to administer BSA, we recommend that the Director of FinCEN expeditiously take the following two actions: (1) work with the Commissioner of IRS to establish a mutually agreed-upon process that facilitates communication on IRS referrals and ensures timely feedback to IRS-examined institutions, and (2) finalize data-access MOUs with SROs conducting BSA examinations and state agencies conducting AML examinations that currently have no direct access to BSA data.

We provided a draft of this report to the heads of the Departments of Justice and the Treasury; the Federal Reserve, FDIC, NCUA, OCC, OTS, IRS, SEC, and CFTC. We received written comments from FinCEN, IRS, and all the financial regulators. These comments are summarized below and reprinted in appendixes IV-XII. All of the agencies provided technical comments, which we incorporated into this report where appropriate. In its comments, IRS agreed with our recommendation that the IRS Commissioner work with state agencies to develop a process by which to coordinate BSA examination schedules. The agency said that actions to address our recommendation already were underway. In their written responses, all of the agencies agreed with our recommendation that they consider developing a mechanism or using an existing process to conduct regular, nonpublic discussions of BSA examination procedures and general trends to better ensure consistency in the application of BSA. In technical comments, some agencies asked that we be more specific about which component of their agencies should participate in and conduct these discussions. We modified the recommendation language to clarify that the heads of the agencies should direct appropriate staff to undertake these actions. The Federal Reserve commented that such discussions could build on improvements already made in examination processes and that regular discussion of examination procedures and general compliance trends could be beneficial. FDIC agreed that periodic meetings with all federal agencies responsible for BSA compliance could promote consistency and coordination in examination and enforcement approaches and help reduce regulatory burden. OCC commented that a number of groups and processes already existed for sharing information and collaboration and that it would continue to participate in these initiatives and look for opportunities to share its practices and observations.
OTS commented that it would collaborate and that the federal banking agencies and FinCEN have established a number of formal committees and working groups to promote collaboration on BSA issues. SEC agreed that the regulators would benefit from the development of such a mechanism and noted that it planned to attend a meeting in which FinCEN was planning to discuss possible methods for achieving this goal. CFTC commented that it supports all efforts to increase cooperation among regulators in the BSA area and that it would be pleased to participate in discussions that would allow the agency to share experiences and expertise in developing and implementing BSA examination procedures. In its comments, FinCEN said it concurred with the intent of our recommendations, particularly in regard to expanding information sharing with authorized stakeholders, and hoped to be situated in the future to meet them. The draft report that we sent to the agencies for comment contained a recommendation that FinCEN finalize information-sharing and data-access MOUs with CFTC. These MOUs were signed on January 15, 2009, so we have removed the recommendation from the final report. In its comments, CFTC noted that the MOUs had been signed and said that it believed these two agreements would enhance CFTC's ability to effectively implement its BSA examination responsibilities. Through discussions with FinCEN officials and in its technical comments, FinCEN provided us with additional information and data about our draft recommendation on IRS referrals. We subsequently broadened the recommendation language to clarify that FinCEN should work with IRS to develop a process to facilitate communication on referrals and ensure timely feedback to IRS-examined institutions. FinCEN and IRS said they agreed with this modification. Finally, in its comments, SEC also supported our recommendation that FinCEN finalize data-access MOUs with SROs that conduct BSA examinations. SEC noted its view that direct access to BSA data would permit FINRA to more effectively use its AML resources to take a more risk-based approach to identifying firms and areas within a firm's AML program that required examination.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to interested congressional committees, Treasury, FinCEN, Federal Reserve, FDIC, OCC, OTS, NCUA, SEC, CFTC, IRS, and Justice. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8678 or edwardsj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XIII.

Our objectives were to (1) describe how Bank Secrecy Act (BSA) compliance and enforcement efforts are distributed among federal and state regulators, self-regulatory organizations (SRO), and the Financial Crimes Enforcement Network (FinCEN); (2) describe how federal agencies other than FinCEN are implementing their BSA activities and evaluate their coordination efforts; and (3) evaluate how FinCEN is executing its BSA responsibilities and coordinating BSA efforts among the various agencies.
To describe how BSA compliance and enforcement efforts are distributed among federal regulators, SROs, and FinCEN, we reviewed and analyzed authorities established by BSA, the USA PATRIOT Act, and other relevant federal financial and anti-money laundering (AML) legislation. We also reviewed prior GAO and Department of the Treasury (Treasury) Inspector General reports on this issue. In addition, to better understand how BSA/AML authorities were delegated and interrelate with other financial regulatory authorities, we interviewed officials from the federal agencies included in the BSA/AML compliance and enforcement regulatory framework—FinCEN; the federal banking regulators: the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), and National Credit Union Administration (NCUA); Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), and the SROs they regulate; Internal Revenue Service (IRS); and Department of Justice (Justice).

To examine how entities with BSA/AML compliance and enforcement responsibilities implement their BSA activities and evaluate their coordination efforts, we reviewed prior GAO reports; available BSA/AML examination manuals and procedures; other related guidance; reports compiled in accordance with FinCEN information-sharing memorandums of understanding (MOU); and data maintained on the numbers of BSA/AML examinations, violations, and enforcement actions taken in the banking, securities, futures, and IRS-examined industries. Further, we conducted data reliability assessments of BSA/AML-related data and found the information to be reliable for the purposes of this report. In addition, we reviewed quality assurance reviews conducted by the federal banking regulators of their BSA/AML examinations. We interviewed officials from all of the federal agencies and their SROs mentioned above and also spoke with officials from select state financial regulatory agencies to obtain information on their BSA/AML compliance and enforcement activities and how these state agencies coordinate with federal agencies. We selected state regulators to interview on the basis of their geography, the presence of a High Intensity Financial Crime Area in their state, the size and variety of the financial sectors present in their state, the existence of a money services business (MSB) examination program in their state, and whether they were contacted by GAO for a previous BSA/AML-related GAO report in 2006. With respect to the federal banking regulators and their efforts to ensure BSA compliance among depository institutions, we reviewed the Federal Financial Institutions Examination Council (FFIEC) BSA/AML interagency examination manual, and GAO staff attended 3 days of training on the manual provided to federal and state bank examiners. We also reviewed the quarterly and annual reports that the federal banking regulators submitted to FinCEN per their MOUs, which included data on examinations, violations, and enforcement actions, as well as information on staffing and training. We reviewed these reports to assess whether regulators were in compliance with MOU requirements and to inform our understanding of their BSA/AML compliance activities.
In addition to meetings with federal banking regulator BSA/AML program staff, we also held interviews with groups of examiners from each of the federal banking regulators to discuss the manual and interagency coordination. We also spoke with a state banking regulatory association and a credit union regulatory association. Further, to obtain industry perspective, in cooperation with another GAO team looking at the usefulness of suspicious activity reports (SAR), we interviewed two banking industry associations and 20 depository institutions on the impact of the manual and coordination among federal and state banking regulators. To select the 20 depository institutions, we grouped the depository institutions into four categories depending on the numbers of SARs filed in calendar year 2007. We interviewed representatives from all 5 institutions that had the largest number of SAR filings in 2007, as well as representatives from 15 randomly selected institutions. The 15 institutions represented different categories of SAR filings: small (1-4 SARs filed in 2007), medium (5-88), and large (more than 88—excluding the 5 largest).

To obtain information on the BSA/AML compliance and enforcement activities of SEC, CFTC, and IRS, we interviewed officials from these agencies, as well as officials from securities and futures SROs; state regulatory agencies; securities and futures firms; and securities, futures, and money transmitter industry associations. We interviewed 8 securities firms under the auspices of an industry trade association and interviewed one large and one small futures firm drawn from a list provided by a futures regulator. In addition, we reviewed available examination modules; related training guidance; and reports provided to FinCEN by SEC and IRS in accordance with their information-sharing MOUs that contain data on BSA/AML examinations, violations, and enforcement actions, as well as BSA/AML training and staffing information. We obtained and reviewed similar information from CFTC. To describe Justice's enforcement actions, we interviewed Justice officials, analyzed Justice's enforcement actions, and reviewed other BSA/AML-related Justice documentation. In order to evaluate coordination efforts, we compared the practices of these agencies with best practices outlined in a GAO report evaluating coordination practices among federal agencies.

To evaluate FinCEN BSA/AML compliance and enforcement efforts, we collected and reviewed available staffing and performance measurement data from FinCEN, program assessments, BSA/AML-violation referral data from its Case Management System (CMS), FinCEN analytical products, strategic plans and annual reports, and other documentation. We also assessed the reliability of data provided to us by FinCEN from its CMS and found it to be reliable for the purposes of this report. In addition, we reviewed the three surveys FinCEN conducted of users of its Regulatory Resource Center in 2006, 2007, and 2008 and a fourth survey it conducted of regulators with which it has information-sharing MOUs. Despite some potential limitations associated with the surveys, we concluded that the overall frequencies for survey questions should be sufficiently valid and reflected the overall opinions of those surveyed. FinCEN officials also told us that information-sharing MOU survey respondents might have, in some cases, been providing responses to reflect their experiences with data-access MOUs.
Further, we interviewed FinCEN officials from the Office of the Director, Management Programs Division, the Analysis and Liaison Division, and the Regulatory Policy and Programs Division (RPPD). We conducted interviews with staff from each of the offices within RPPD. In addition, we conducted interviews with officials from the federal banking regulators, SEC, CFTC, securities and futures SROs, IRS, and industry to discuss FinCEN's efforts. We conducted this performance audit in Washington, D.C., New York, New York, and Chicago, Illinois, from October 2007 to February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix provides an overview of the compliance and enforcement activities of the federal financial regulators and IRS and provides information, to the extent it is available, on their BSA-related resources and training. The federal banking regulators (the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), and National Credit Union Administration (NCUA)), Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), securities and futures self-regulatory organizations (SRO), and Internal Revenue Service (IRS) play roles in implementing BSA/AML compliance. The U.S. regulatory system is described as "functional," so that financial products or activities are generally regulated according to their function, no matter who offers the product or participates in the activity. Below is a discussion of their missions and how they undertake general compliance and enforcement activities within their industries.

Depository institutions can generally determine their regulators by choosing a particular kind of charter—for example, commercial bank, thrift, or credit union. These charters may be obtained at the state level or the national level. While state regulators charter institutions and participate in oversight of those institutions, all of these institutions have a primary federal regulator if they have federal deposit insurance. Broadly, the federal banking regulators that provide oversight for banks are the Federal Reserve, FDIC, and OCC; for thrifts, OTS; and for credit unions, NCUA. Banking regulators generally focus on ensuring the safety and soundness of their supervised institutions. They conduct safety and soundness examinations on-site to assess an institution's financial condition, policies and procedures, and adherence to laws and regulations. Generally, regulatory agencies perform these examinations every 12 to 18 months, based on the institution's risk. The Federal Reserve, FDIC, OTS, and NCUA (but not OCC) alternate or conduct joint safety and soundness examinations with state regulators, generally using the same examination procedures. State banking regulators may examine depository institutions chartered within their jurisdictions. Federal and state banking regulators may address compliance problems identified through their examinations by bringing the problem to the attention of institution management and obtaining a commitment to take corrective action.
When these actions are insufficient or weaknesses identified are more substantive, regulators may take nonpublic, informal enforcement actions. Informal actions (which vary among the federal banking regulators) may include the adoption of resolutions by an institution's board of directors, the execution of a memorandum of understanding between an institution and the regulators, notices of safety and soundness deficiency for compliance, commitment letters, or corrective actions to be taken to address regulatory concerns. Informal actions usually are taken to address violations that are limited in scope and technical in nature. Federal banking regulators also may take formal enforcement actions if a depository institution is engaging in unsafe or unsound practices or has violated a law or regulation. Formal enforcement actions are public and generally considered more stringent than informal actions and can address more significant, repeated, or systemic BSA violations. Formal enforcement actions include cease-and-desist orders, assessments of civil money penalties (CMP), or supervisory agreements. These types of actions are enforceable through an administrative process or injunctive relief in federal district court.

SEC's mission is to protect investors; maintain fair, orderly, and efficient securities markets; and facilitate capital formation. SEC regulates the securities industry in part through oversight of its SROs. SEC, through its Office of Compliance Inspections and Examinations (OCIE), shares examination responsibilities with securities SROs, responsibilities that include examining for BSA/AML compliance. OCIE's routine examinations are conducted according to a cycle that is based on a registrant's perceived risk. In addition to routine examinations, OCIE also may conduct sweep examinations to probe specific activities of a sample of firms to identify emerging compliance problems so they can be remedied before becoming severe or systemic. Finally, OCIE conducts cause examinations when it has reason to believe that something is wrong at a particular firm. SROs have statutory responsibilities to regulate their own members, and one SRO—the Financial Industry Regulatory Authority (FINRA)—provides oversight of the majority of broker-dealers in the securities industry. SROs conduct risk-based examinations, which include a BSA component, of their members to ensure compliance with SRO rules and federal securities laws. These examinations are conducted on a risk-based cycle (similar to SEC's), and no member is examined less frequently than every 4 years. Through oversight inspections of the SROs, OCIE evaluates the quality of the SROs' oversight in enforcing member compliance. At regular intervals, OCIE conducts routine inspections of SROs' key regulatory programs, such as SRO enforcement, arbitration, and examination programs. Inspection of enforcement programs typically includes a review of SRO surveillance programs for identifying potential violations of trading rules or laws, investigating those potential violations, and disciplining those who violate the rule or law. SEC and its SROs also have enforcement divisions that are responsible for investigating and prosecuting violations of securities laws or regulations as identified through examinations; referrals from other regulatory organizations; and tips from firm insiders, the public, and other sources. For less significant issues, examiners may cite a deficiency for correction through remedial actions.
SEC and SRO examiners conduct exit interviews with firms, which are usually followed by letters discussing examination findings. SEC issues deficiency letters that formally identify compliance failures or internal control weaknesses at a firm. Most examinations conclude with the firm voluntarily correcting the compliance problem and stating the specific actions it is taking in its response to SEC. Potential SEC enforcement sanctions include disgorgement, CMPs, cease-and-desist orders, and injunctions. When SROs find evidence of potential violations of securities laws or SRO rules by their members, they can conduct disciplinary hearings and impose penalties. These penalties can range from disciplinary letters to the imposition of monetary fines to expulsion from trading and SRO membership.

CFTC's primary mission is to preserve the integrity of the futures markets and protect market users and the public from fraud, manipulation, and abusive practices related to the sale of commodity futures and options. While CFTC directly performs the market surveillance and enforcement functions, CFTC carries out its regulatory functions with respect to futures firms through SROs that act as the primary supervisor for members of the futures industry. CFTC does not routinely conduct direct examinations of the institutions that it supervises; instead, it oversees the examinations of futures firms conducted by the SROs—the National Futures Association (NFA), Chicago Mercantile Exchange, New York Mercantile Exchange, Chicago Board of Trade, and the Kansas City Board of Trade. Each futures exchange is an SRO that governs its floor brokers, traders, and member firms. NFA also regulates every firm or individual that conducts futures trading business with public customers. SROs are responsible for establishing and enforcing rules governing member conduct and trading, providing for the prevention of market manipulation, ensuring futures industry professionals meet qualifications, and examining exchange members for financial soundness and other regulatory purposes. SROs examine their members for compliance with their rules, including those imposing BSA/AML requirements. The futures SROs' examination cycles range from 9 to 18 months for futures commission merchants, but introducing brokers may have longer examination cycles. While CFTC does not conduct routine examinations of futures firms, it provides oversight of futures SROs to ensure that each has an effective self-regulatory program. CFTC's Division of Clearing and Intermediary Oversight conducts periodic, risk-based examinations of an SRO's compliance examination program, which may include BSA/AML issues. During the examination, CFTC reviews the SRO's documentation of select examinations and independently performs examinations for the same periods to compare its results with those of the SRO's examinations. SROs may take enforcement actions against any member that is in violation of member rules and CFTC regulations, which include BSA/AML-related rules. BSA/AML obligations for the futures industry are set forth in the USA PATRIOT Act, BSA, FinCEN and CFTC regulations, and SRO member rules. CFTC's Division of Enforcement investigates and prosecutes alleged violations of the Commodity Exchange Act and CFTC regulations, and reviews SRO open investigations and enforcement actions.

IRS is a bureau within Treasury, with the mission of helping taxpayers understand and meet their tax responsibilities and ensuring that all taxpayers comply with tax laws.
Unlike the other agencies with BSA/AML compliance responsibilities, IRS examines institutions solely for compliance with BSA/AML rules and regulations rather than as part of broader regulatory examinations. FinCEN delegated BSA examination authority to IRS for any financial institution not subject to BSA examination by another federal regulator. These institutions are mainly nonbank financial institutions (NBFI) such as casinos, some credit unions, credit card operators, and approximately 200,000 money services businesses (MSB), which are the most numerous of the NBFIs. IRS's Small Business/Self-Employed Division, which reports to the Deputy Commissioner for Services and Enforcement, conducts BSA compliance examinations of NBFIs. In 2004, IRS created the Office of Fraud/BSA within the Small Business/Self-Employed Division to better focus on BSA examinations of NBFIs. IRS's BSA program also aims to increase the number of identified NBFIs, conduct outreach and education to NBFIs, and refer any NBFIs to FinCEN or IRS Criminal Investigation for civil and criminal enforcement actions. IRS Criminal Investigation, IRS's enforcement arm, investigates individuals and businesses suspected of criminal violations of the Internal Revenue Code, money laundering and currency crime, and some BSA laws. IRS Criminal Investigation usually investigates BSA criminal violations in conjunction with other tax violations. IRS Criminal Investigation's first enforcement priority is tax fraud and tax evasion, but currency reporting and money laundering enforcement also are areas of emphasis.

The federal banking regulators, SEC, and CFTC incorporate their BSA activities into their overall compliance programs. However, all the regulators track either the number of hours spent on BSA/AML issues or the number of staff with BSA/AML-related responsibilities. All of the regulators have staff that examine institutions for BSA/AML compliance concurrently with their comprehensive safety and soundness compliance examinations. The points below summarize BSA/AML-specific data (for 2008 where possible) for each regulator (IRS excepted):

Federal Reserve. The Federal Reserve has a BSA/AML Risk Section within its Division of Banking Supervision and Regulation, which consists of seven staff who monitor BSA/AML compliance concerns and liaise with staff from Federal Reserve Banks to provide guidance on BSA/AML issues. Federal Reserve officials said they also have BSA/AML specialists located in some Federal Reserve Banks.

FDIC. In 2008, of the 1,680 examiners who conduct safety and soundness examinations (during which a BSA/AML examination is conducted concurrently), 324 were BSA subject matter experts and 117 were certified AML specialist examiners. Further, FDIC officials estimated the agency devoted 107.4 and 103.5 full-time equivalent positions to BSA/AML activities in 2006 and 2007, respectively.

OCC. OCC has a Director for BSA/AML Compliance who oversees a staff of six full-time BSA/AML compliance specialists in its headquarters. From 2005 through 2007, OCC officials estimated that the agency annually devoted an average of 105 full-time equivalent positions to the BSA, while in 2008, OCC devoted approximately 86 full-time equivalents.

OTS. In 2008, OTS reported that five Regional Assistant Directors for Compliance serve as subject matter resources on BSA, in addition to 15 regional compliance specialists and 2 national office staff who are dedicated to BSA/AML issues.
OTS officials estimated the time the agency's attorneys devoted to BSA/AML issues as being equivalent to two full-time positions.

NCUA. As of September 30, 2008, NCUA reported employing 514 examiners, which included 31 examiners designated as consumer compliance subject matter examiners (a designation that includes BSA/AML issues). Each of NCUA's five regional offices has at least one BSA/AML analyst, its Office of Examination and Insurance has two BSA/AML program officers, and the Office of General Counsel has two attorneys who focus on BSA issues.

SEC. SEC has a BSA/AML team composed of five to seven OCIE staff members, three to five Division of Enforcement staff members, and three members from the Division of Trading and Markets. The team is responsible for monitoring its BSA/AML examination program; providing expertise to regional offices; and maintaining communication with FinCEN, the SROs, and other regulators on AML issues. Further, SEC broker-dealer examination staff have an AML working group consisting of one or more representatives from each regional office, who serve as AML experts. FINRA has nine AML regulatory experts.

CFTC. CFTC does not have full-time staff dedicated solely to BSA/AML compliance; however, various staff may be involved in BSA/AML issues. CFTC staff conduct periodic oversight examinations of SROs' compliance examination programs, which include a review of BSA/AML procedures. CFTC staff also devote time to BSA/AML policy issues during the rulemaking process and at other times, as requested by FinCEN. Futures SROs include BSA/AML as part of their broader compliance examination programs. NFA and the Chicago Mercantile Exchange have 130 and 59 examination staff, respectively, all of whom have been trained in BSA/AML.

All of the regulators and their SROs that examine financial institutions for BSA/AML compliance provide opportunities for their staff to receive BSA/AML training—provided by the agency, working groups (such as FFIEC), or outside vendors. FFIEC, for example, provides both an AML workshop for examiners knowledgeable of BSA and experienced in examining institutions for BSA program compliance and, as of 2007, an advanced BSA/AML specialists conference for designated BSA compliance examiners and other BSA subject matter experts. In 2007, over 400 trainees participated in these programs. Agencies and SROs provided several examples of BSA/AML training available to their staff and others (see table 11).

Unlike the federal banking regulators, SEC, and CFTC, which incorporate BSA activities into their compliance programs, IRS manages its BSA/AML activities separately in its Office of Fraud/BSA within the Small Business/Self-Employed Division. This office is solely dedicated to examining NBFIs for BSA compliance. Since IRS created the office, IRS has tracked several BSA-specific output and efficiency performance measures, such as number of examinations, referrals, closures, and hours per case (see table 12). IRS also has a detailed strategic plan devoted to BSA compliance and enforcement activities. We previously reported that IRS lacked a measure for NBFI compliance rates with BSA and thus could not track program effectiveness over time. We recommended that the Secretary of the Treasury direct FinCEN and IRS to develop a documented and coordinated strategy—one that would include priorities, time frames, and resource needs and measure the compliance rate of NBFIs—to improve BSA compliance by NBFIs.
IRS and FinCEN responded by developing such a strategy, which identifies various NBFI categories, prioritizes actions to be taken overall and within each category for improving BSA compliance, explains who is responsible for the actions, and establishes the time frames for identifying whether an action has been completed or when it is to be completed. Similar to the other regulators, IRS's Office of Fraud/BSA conducts quality reviews of examinations. Over the last several years, IRS has increased the resources it devotes to BSA compliance. In fiscal year 2007, IRS spent over $71 million and 700 full-time equivalents on BSA-related activities, an increase of 3 percent and 5 percent, respectively, from 2006. Specifically, the Small Business/Self-Employed Division's Office of Fraud/BSA increased its BSA field examiner staff from 372 in 2006 to 385 in 2007. New Small Business/Self-Employed Division employees receive Basic BSA/AML training on both BSA and currency transaction reporting requirements (Form 8300 examinations). Experienced BSA examiners receive specialized training for specific industries, such as insurance companies, credit unions, casinos, and jewelry and precious metals dealers. IRS also has developed specific BSA training for managers and coaches of BSA examiners. The Office of Fraud/BSA also distributes a BSA/AML examination guide, provides BSA newsletters, and has updated the Insurance Industry Guide and Internal Revenue Manual.

In fiscal year 2008, approximately 70 BSA/AML-related formal enforcement actions were taken by the federal financial regulators—the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), and Securities and Exchange Commission (SEC)—and by the National Futures Association (NFA), the Financial Industry Regulatory Authority (FINRA), and other self-regulatory organizations (SROs). In fiscal years 2006-2008, the Financial Crimes Enforcement Network (FinCEN) and the federal financial regulators and SROs jointly assessed 11 civil money penalties (CMP). Table 13 contains examples of formal enforcement actions, excluding CMPs, that were not taken concurrently with FinCEN. Table 14 lists examples of BSA/AML-related CMPs issued (1) jointly by federal and state regulators, SROs, and FinCEN; (2) solely by FinCEN; and (3) by federal regulators only.

In addition to the contact named above, Barbara I. Keller (Assistant Director), Allison M. Abrams, M'Baye Diagne, John P. Forrester, Kerstin Larsen, Carl Ramirez, Barbara M. Roesmann, Ryan Siegel, and Paul Thompson made key contributions to this report.
The legislative framework for combating money laundering began with the Bank Secrecy Act (BSA) in 1970 and most recently expanded in 2001 with the USA PATRIOT Act. The Financial Crimes Enforcement Network (FinCEN) administers BSA and relies on multiple federal and state agencies to ensure financial institution compliance. GAO was asked to (1) describe how BSA compliance and enforcement responsibilities are distributed, (2) describe how agencies other than FinCEN are implementing those responsibilities and evaluate their coordination efforts, and (3) evaluate how FinCEN is implementing its BSA responsibilities. Among other things, GAO reviewed legislation, past GAO and Treasury reports, and agreements and guidance from all relevant agencies; and interviewed agency, association, and financial institution officials.

FinCEN is responsible for the administration of the BSA regulatory structure and has delegated examination responsibility to the federal banking regulators (Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, Office of Thrift Supervision, and National Credit Union Administration), the Securities and Exchange Commission (SEC), the Commodity Futures Trading Commission (CFTC), and the Internal Revenue Service (IRS). The federal banking regulators, SEC, CFTC, securities and futures self-regulatory organizations (SRO), and state agencies also have their own separate authorities to examine for compliance among institutions they supervise and take enforcement actions for noncompliance. FinCEN has retained enforcement authority for BSA and may take enforcement actions independently or concurrently with the regulators.

While federal agencies have enhanced their BSA compliance programs, opportunities exist to improve interagency and state examination coordination. The federal banking regulators issued an interagency examination manual; SEC, CFTC, and their respective SROs developed BSA examination modules; and FinCEN and IRS, which examines nonbank financial institutions (NBFI), issued an examination manual for money services businesses (MSB). However, IRS has not fully coordinated MSB examination schedules with the states that also examine MSBs, potentially missing opportunities to reduce duplication and leverage resources. The federal financial regulators traditionally have different compliance approaches for their industries. With respect to BSA, multiple regulators are examining for compliance with the same legislation across industries and, for some larger holding companies, within the same institution. However, they do not have a mechanism through which all regulators discuss (without industry present) how to promote greater consistency, reduce unnecessary regulatory burden, and identify concerns across industries. Federal banking regulators reported improved transparency and coordination of enforcement actions.

While FinCEN has increased regulatory resources, provided examination support, and made advances in outreach, it could improve its information-sharing efforts. FinCEN improved its system for tracking referrals, but the lack of a process for communication between IRS and FinCEN on IRS referrals, coupled with IRS's limited enforcement authority, may delay timely feedback to IRS-examined institutions.
FinCEN completed more information-sharing memorandums of understanding (MOU) with federal and state agencies, but did not sign its MOU with CFTC until January 2009, which limited the two agencies' information-sharing efforts. Some state regulators and securities and futures regulators continue to have no electronic access to BSA data. Lack of direct access to BSA data impedes their ability to identify potential risk areas on which to focus their examinations and effectively leverage resources. FinCEN officials said they finalized a data-access template in July 2008 and have begun providing more electronic access.
Congress has assigned to Treasury the responsibility of borrowing the funds necessary to finance the gap between the money that the government receives, primarily tax revenues, and the money that the government spends. Government expenditures include regular withdrawals for programs such as Medicare and Social Security as well as extraordinary withdrawals for programs such as TARP. Treasury also makes interest and principal payments for outstanding debt and debt that is maturing on a continual basis. Treasury's primary debt management goal is to finance the government's borrowing needs at the lowest cost over time, subject to a statutory limit. To meet this objective, Treasury issues debt through auctions across a wide range of securities mainly in a "regular and predictable" pattern based on a preannounced auction schedule, which it releases on a quarterly basis. Treasury does not "time the market"—or take advantage of low interest rates—when it issues securities. Instead, Treasury is able to lower its borrowing costs by relying on regularly scheduled auctions because investors and dealers value transparency, stability, and certainty of large liquid supply. Market participants often characterize Treasury securities as the premium risk-free asset. Investors, traders, banks, and foreign central banks actively use them for hedging, liquidity, capital requirements, and reserve purposes. Treasury securities are also a popular investment for end-investors seeking liquidity and low risk.

Treasury's "regular and predictable" auctions are for nominal marketable securities that range in maturity from 4 weeks to 30 years and for TIPS that are issued with 5-, 10-, and 30-year maturities. TIPS offer a variety of benefits to Treasury and inflation protection to investors, who are willing to pay a premium for this protection in the form of an interest rate on TIPS that may be lower than a comparable nominal issuance over the life of the instrument. Treasury responds to increases in borrowing needs in a traditional manner by (1) increasing the issuance size of existing securities; (2) increasing the frequency of issuances; and (3) introducing new securities to its auction calendar as necessary. Treasury announces upcoming changes during quarterly refundings so that the market is not surprised.

In some instances, Treasury supplements its "regular and predictable" auction schedule with flexible securities called cash management bills (CMB). Because of the nature of CMBs, Treasury does not publish information about CMBs on its quarterly auction schedule as it does for other securities. Instead, Treasury announces CMB auctions anywhere from 1 to 4 days ahead of the auction. Treasury also indicates whether it might issue CMBs over the upcoming quarter in quarterly refunding statements. The term to maturity—or length of time the CMB is outstanding—varies according to Treasury's cash needs. Treasury generally uses CMBs to finance intramonth funding gaps due to timing differences of large cash inflows and outflows. Treasury also uses CMBs to meet sudden and unexpected borrowing needs, such as those that arose from the government's responses to the financial market crisis and economic downturn in 2008 and 2009. The outstanding mix of Treasury securities can have a significant influence on the federal government's interest payments.
Longer-term nominal securities typically carry higher interest rates (which translate to increased cost to the government), primarily due to investor concerns about the uncertainty of future inflation. However, longer-term securities offer the government the certainty of fixed interest payments over a longer period and reduce the amount of debt that Treasury needs to refinance in the short term. In contrast, shorter-term securities generally carry lower interest rates but add uncertainty to the government's interest costs and require Treasury to conduct more frequent auctions to refinance maturing debt, which also poses rollover risk. Among Treasury's short-term securities, those that are issued on a "regular and predictable" schedule generally carry the lowest interest rates.

Two groups, (1) the primary dealers and (2) the Treasury Borrowing Advisory Committee (TBAC) of the Securities Industry and Financial Markets Association (SIFMA), provide regular input to Treasury debt management decisions. The primary dealers are a group of banks and securities broker/dealers, selected by the Federal Reserve Bank of New York (FRBNY), that trade in U.S. government securities with the FRBNY on behalf of the Federal Reserve in order to implement monetary policy. They are also required by the FRBNY to participate in all Treasury auctions. On a quarterly basis, Treasury surveys the primary dealers and also meets with half of them in person. Treasury also meets quarterly with TBAC, an advisory committee that is governed by federal statute and composed of senior-level officials who are employed by primary dealers, institutional investors, and other major participants in the Treasury market. Treasury also monitors market trends through regular contact with the Markets Group at FRBNY, subscriptions to all major investment houses' fixed income research publications, attendance at fixed income conferences, and meetings with large foreign investors and reserve managers.

The borrowing associated with the actions that the federal government took in response to the financial-market crisis and recession—including TARP, the Supplementary Financing Program (SFP), and the Recovery Act—substantially altered the size and composition of Treasury's outstanding debt portfolio. Since the onset of the recession in December 2007, Treasury's total outstanding debt has increased by $3.082 trillion, and marketable debt increased by $2.735 trillion. At the end of December 2009, total outstanding debt was $12.311 trillion, and total outstanding marketable securities stood at $7.272 trillion. According to Treasury, in fiscal year 2009, Treasury held a record 291 auctions in 251 business days and issued nearly $7 trillion in gross marketable securities, a significant portion of which was used to roll over, or refinance, existing debt. The mix of securities Treasury issued in 2008 and 2009 substantially shortened the average maturity of its debt portfolio and increased the debt maturing in the next 12 months. As seen in figure 1, when looking at Treasury's outstanding marketable securities during the period December 31, 2006, to December 31, 2009, the percentage of securities maturing within a year peaked in December 2008. Reflecting the same trend, the average term to maturity of outstanding marketable securities reached its lowest point of 49 months in December 2008. As we reported in September 2009, these changes were in accordance with what Treasury described to us as its normal operating procedures.
Our September report included specific details about Treasury's debt issuance between December 2007 and June 2009. The changes to Treasury's debt portfolio, as discussed above, were not intended to be permanent, and Treasury has already started to transition back to pre-financial-market-crisis levels of average maturity and composition of the debt portfolio in a manner that, according to Treasury, was as rapid and as prudent as possible. During the November 2009 TBAC press conference, Treasury officials announced that the transition has begun with a shift from bill issuance to nominal note and bond issuance and TIPS issuance. This shift will allow Treasury to retain flexibility in meeting uncertain financing needs in the future. Flexibility is retained by increasing the borrowing capacity that Treasury has available for shorter-term securities, which are used when unexpected financing needs arise. During the February 2010 TBAC press conference, Treasury indicated a shift in the transition with the announcement that nominal note and bond issuance will stabilize in the next year and perhaps even decrease. In February, Treasury stated that nominal auction sizes were at levels that give Treasury the flexibility to address a broad range of potential financing scenarios. Market participants we spoke with anticipated the stabilization of note and bond issuance, but cautioned that any decrease in the amount of nominal note and bond issuance would depend on tax receipts. Treasury has said that it expects the average term to maturity of outstanding marketable debt to approach the historical average of 5 years (or 60 months) by the end of fiscal year 2010 and perhaps to exceed it in the next 3-5 years. Treasury officials have indicated that the changes they are making to the overall debt portfolio will bring short-term bill levels closer to historical averages while stabilizing or perhaps even decreasing nominal note and bond issuance. Treasury has emphasized the importance of making these changes in a gradual, transparent, and incremental manner. Some market participants have expressed concern about a reduction in bill supply. Investors use bills to invest their funds temporarily in a safe and highly liquid asset. Bills are also used by institutional investors that are required to buy financial assets maturing in a year or less. Treasury recognizes the importance of adequate bill supply and said that it will continue to monitor the bills market for any disruptions that the decrease in bill supply may cause.

Shortly after the start of the financial-market crisis in the fall of 2008, Treasury borrowed an unprecedented $1.1 trillion in under 18 weeks, largely by issuing CMBs, which are intended for unexpected and immediate cash needs. Treasury's use of CMBs was substantial and continued well after the beginning of the financial-market crisis. The sustained increase was due in part to the SFP, a temporary program created in September 2008 to provide cash for use in Federal Reserve initiatives intended to address heightened liquidity pressures in the financial markets. In 2008 and 2009, Treasury's gross issuance of CMBs was $1.432 trillion and $1.142 trillion, respectively (of which $785 billion and $835 billion, respectively, were issued for the SFP). This compares to average issuance of about $254 billion annually from 2005 to 2007. (See fig. 2.)
To issue $1.432 trillion worth of CMBs in 2008, Treasury held 47 auctions (of which 21 were for the SFP), compared to an average of 18 auctions annually from 2005 to 2007. CMBs that were issued in 2008 and 2009 also departed from historical norms in that their terms to maturity increased significantly. Prior to 2008, Treasury typically used CMBs to fund intramonth funding gaps and, in certain instances, to provide Treasury borrowing flexibility when it was approaching the debt limit. Between 2002 and 2007, CMBs typically had a term to maturity of less than 2 weeks. During 2005, 2006, and 2007, the average term to maturity of CMBs was 10 days, 9 days, and 10 days, respectively. In contrast, in 2009, the average term to maturity of CMBs was 109 days, or 15.6 weeks. Removing those CMBs that were used for the SFP (debt issued for the SFP does not pay for government expenditures), the average term to maturity of the remaining CMBs was 99 days in 2008 and 198 days in 2009.

During its February 2008 quarterly refunding process, Treasury announced its plans to issue longer-dated CMBs. This was a change from Treasury's recent practice of not issuing CMBs with maturities greater than 21 days and, according to Treasury, was necessary to spread the extraordinary financing needs away from the front end of the bill market. Treasury stated that longer-dated maturities would be issued because of seasonal fluctuations in cash balances, volatility associated with the timing of tax refunds, and the increased use of electronic payments versus check payments. On February 13, 2008, Treasury auctioned a 63-day CMB, which had a longer maturity than any other CMB issued in the previous 3 fiscal years. Treasury issued additional CMBs with terms to maturity of greater than 300 days during both fiscal years 2008 and 2009. Longer-dated CMBs were also, in many instances, reopenings of existing Treasury bills. Twenty of the 37 non-SFP CMBs issued in 2008 and 2009 were reopenings of outstanding Treasury bills. Treasury officials told us that they consulted with market participants and decided that longer-dated CMBs, for example 9-month bills, were a prudent, short-term mechanism to raise cash and that their terms approximated the length of time it would take for coupon issuance to "catch up" and shoulder a bigger share of Treasury's financing needs.

While CMBs provided Treasury with needed borrowing flexibility immediately following the start of the financial market crisis in 2008, Treasury paid a premium for its sustained use of CMBs in 2008 and 2009. We reported in 2006 that Treasury had paid a premium for its use of CMBs during the period 1996 to 2005. During that period, Treasury paid a higher yield on most CMBs than outstanding Treasury bills of a similar maturity paid in the secondary market. In the low-interest-rate environment during 2008 and 2009, all debt, but particularly short-term debt, was relatively inexpensive for Treasury; however, since the dollar amount of CMBs issued in 2008 was 5.6 times greater than the amount issued in 2007, even a small premium could be costly. Our analysis shows that of the 37 CMBs not issued for the SFP in 2008 and 2009, most had a higher yield when compared with outstanding Treasury bills of a similar maturity in the secondary market.
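To illustrate how a small yield premium translates into added interest cost, the rough calculation below prorates a hypothetical yield differential over the fraction of a year a bill is outstanding; the issuance amount, differential, and term shown are illustrative assumptions only and are not drawn from Treasury auction data.

\[
\text{added cost} \approx \text{amount issued} \times \text{yield differential} \times \frac{\text{term to maturity (days)}}{365}
\]

\[
\$50\ \text{billion} \times 0.0003\ (\text{3 basis points}) \times \frac{90}{365} \approx \$3.7\ \text{million}
\]

Applied across the several hundred billion dollars of non-SFP CMBs issued in 2008 and 2009, even a differential of a few basis points accumulates into a meaningful cost.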
The difference between these CMB yields and the yields on outstanding bills of similar maturity in the secondary market—known as the yield differential—was positive for the second half of 2008 and all of 2009, averaging 2.7 basis points (or $184 million based on the amount issued). CMBs play an important role in Treasury debt management, and it is likely that Treasury will always need to use CMBs, but Treasury could achieve savings by limiting the amount of CMBs it issues. Treasury has already begun its transition out of CMBs that are not linked to the SFP. As part of that transition, it has extended the average term to maturity of outstanding marketable securities by stabilizing short-term debt issuances and transitioning to nominal note and bond issuances. In February 2010, Treasury officials said that they planned to stabilize nominal note and bond issuance in the first half of 2010 and perhaps reduce nominal note and bond issuance in the second half of 2010. As of September 2009, 28.5 percent of Treasury's debt portfolio was in bills. If Treasury does not alter its current pattern of issuance, Treasury projects this share will decline to 19 percent by September 2010 and to 16 percent by September 2011. Continuing to transition out of CMBs could reduce Treasury's borrowing costs, increase Treasury's borrowing capacity on the short end of the yield curve, and extend the average term to maturity of the debt portfolio.

The actions that Treasury has taken to increase borrowing in response to the recession and financial-market crisis take place within the context of the already-serious longer-term fiscal condition of the federal government. As seen in figure 3, the Congressional Budget Office (CBO) projects that under the President's fiscal year 2011 budget proposals, the debt held by the public will increase from $9.2 trillion in fiscal year 2010 to $20.3 trillion in 2020. Over this same period, CBO projects that debt held by the public will increase from 63 percent of gross domestic product (GDP) in fiscal year 2010 to 90 percent by the end of fiscal year 2020. Our long-term simulations show growing deficits and debt, underscoring that the long-term fiscal outlook is unsustainable. According to CBO, interest rates and the size of debt held by the public will increase in the medium term, leading to higher interest costs for the government. One way to measure the affordability of debt held by the public is to compare interest payments with expected revenues. As seen in figure 4, according to CBO, net interest payments as a percentage of total revenues will increase from 9.9 percent in fiscal year 2010 to 20.7 percent in fiscal year 2020.

Treasury says its existing suite of securities will leave Treasury well-positioned to meet federal government borrowing needs in fiscal year 2010. Looking beyond 2010, sustained increases in debt in the medium and long term mean that communication with all types of investors to accurately gauge market demand will become increasingly important for Treasury. Sufficient information from market participants, including their likely demand for Treasury securities, is critical for debt management decisions. Treasury receives market information through multiple formal and informal channels. (See fig. 5.) Formal communication channels are quarterly meetings with TBAC and with the primary dealers held as part of Treasury's quarterly refunding process.
TBAC is currently composed of primary dealers, investment managers, hedge funds, and a small broker-dealer. According to Treasury officials, TBAC was once more heavily weighted toward primary dealers than it is now. Buy-and-hold investors of Treasury securities are currently underrepresented. TBAC quarterly meetings serve as a forum for Treasury officials to discuss economic forecasts and the federal government's borrowing needs with knowledgeable market participants. Treasury officials pose questions on specific debt management issues in advance, and TBAC members present their observations to Treasury on these issues and economic conditions. While TBAC meetings are closed due to the sensitivity of the matters under discussion, Treasury releases TBAC meeting minutes at a press conference 1 day after each meeting and announces the details of its quarterly refunding and any changes to its auction calendar or to debt management policies. Treasury officials told us that Treasury seeks to promote market stability by reserving the release of any new information for the formal quarterly announcements.

Treasury also surveys all 18 primary dealers quarterly and meets with half of them one quarter and the other half the following quarter. Primary dealers are those banks and securities broker-dealers that are designated by FRBNY and maintain active trading relationships with FRBNY. Primary dealers are also required by FRBNY to participate in all Treasury auctions. Primary dealers account for a majority of purchases at auction, some of which they purchase for themselves and some of which they purchase for their customers. Treasury meets with half of the primary dealers before each quarterly refunding to obtain estimates on borrowing, issuance, and the federal budget deficit, as well as input on a variety of debt management discussion topics, posed in advance. The only information about these meetings that is released to the public is the agenda.

Treasury officials also receive information from FRBNY's Markets Group, which has approximately 400 staff engaged in market surveillance. FRBNY provides morning and afternoon briefings, hosts a daily afternoon conference call, and provides a daily report on delivery fails in the secondary market for Treasury securities. FRBNY will also conduct specific market research at the request of Treasury. According to Treasury officials, the Office of Debt Management (ODM) relies on FRBNY for some of its market information. FRBNY is able to carry out large data-collection operations because of its greater resources, which supplements the market data Treasury already collects.

In addition to its formal communication with the market, Treasury continually collects information through informal channels, but this communication is not conducted or logged in a systematic manner. ODM's informal communication includes both ad hoc and regular telephone and e-mail contact between six ODM officials and staff and approximately 500 foreign and domestic financial organizations. Treasury also has seven market-room staff who maintain continuous contact with market participants. Treasury also maintains regular informal contact with representatives of foreign central banks. In addition, Treasury regularly contacts primary dealers to discuss operational issues in the Treasury debt market as well as to gather information about what they expect to occur in the Treasury debt market on a given day.
Treasury staff and officials also reach out to investors by speaking at and attending conferences sponsored by market participants and meeting with large investors globally. Responses to our survey of the largest domestic holders of Treasury securities indicate that their views vary on the extent to which Treasury receives sufficient information and input from end-investors. Overall, survey responses suggested room for improvement in Treasury’s practices for gathering market information. Our survey asked respondents the extent to which they believed Treasury receives sufficient information and input from end-investors. They were presented with five response categories that included very great extent, great extent, moderate extent, some extent, and little or no extent, as well as a no basis to judge response choice. Seventeen of the 38 respondents who answered this question on our survey (see fig. 6) answered either some extent or little or no extent. This compares with only 10 respondents who answered very great extent or great extent. “No basis to judge” responses have generally been excluded from our totals except in cases where large numbers of respondents gave this response. Respondents from the mutual fund sector reported holding over $201 billion in Treasury securities; the commercial-banking sector held $125 billion. In contrast to the mostly positive responses of mutual funds and commercial banks, respondents from the remaining sectors—life insurance companies, property casualty insurance companies, and state and local government retirement funds—were more likely to respond negatively. As shown in figure 8, 12 of 20 respondents from life insurance companies, property casualty insurance companies, and state and local government retirement funds answered some or little or no extent when asked whether they believe Treasury currently receives sufficient information from end-investors. Both of the life insurance companies that completed our survey chose little or no extent. Treasury officials have agreed that they could receive better input from end-investors and have made it a priority to improve investor outreach. The survey findings were consistent with information we received during interviews with investors conducted in June 2009, which indicated that many investors in liability-driven sectors, such as life insurance and pension funds, both lack formalized means of communication with Treasury and believe such contact would be beneficial. These investors may have a different demand portfolio than those in other market sectors with whom Treasury maintains closer contact. For example, there may be greater interest in these sectors in buy-and-hold securities like TIPS. With debt levels predicted to continue to rise in the medium and long term, good information from a range of investors in all sectors becomes increasingly important. Respondents to our survey of the largest domestic holders of Treasury securities suggested ideas for improving Treasury’s collection of information from end-investors. The most frequently suggested ideas involved increasing the range of investors from whom Treasury obtains information. Survey respondents told us that they thought Treasury could better gauge market demand for securities if a broader range of investors were represented on TBAC. Survey respondents suggested changes such as broadening membership or rotating membership more frequently.
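The tallies above amount to a simple cross-tabulation of Likert-style answers, with "no basis to judge" responses set aside. A minimal sketch of that kind of tabulation follows; the sector labels, answers, and field names are invented for illustration and are not the survey data.

```python
# Minimal tabulation sketch: count Likert-style answers and set aside
# "no basis to judge" responses, as described above. The records below
# are invented for illustration, not survey data.
from collections import Counter

responses = [
    ("mutual fund", "great extent"),
    ("commercial bank", "moderate extent"),
    ("life insurance", "little or no extent"),
    ("state and local retirement fund", "some extent"),
    ("property casualty insurance", "no basis to judge"),
]

substantive = [(sector, answer) for sector, answer in responses
               if answer != "no basis to judge"]

# Overall distribution of substantive answers.
print(Counter(answer for _, answer in substantive))

# Negative answers ("some" or "little or no" extent) by sector.
negative_by_sector = Counter(sector for sector, answer in substantive
                             if answer in ("some extent", "little or no extent"))
print(negative_by_sector)
```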
Multiple survey respondents told us that some types of end-investors, particularly liability-driven investors such as insurance companies and pension funds, have limited formal means of communicating their views to Treasury. Survey respondents also suggested that Treasury could better gauge market demand through a periodic collection of market data from a broad range of end-investors. They suggested that the periodic data collection could be in the form of a survey, interviews, focus groups, or additional data reporting by market participants. These responses echoed what market experts told us: that Treasury could benefit from periodic “temperature-taking” of the market through surveys or interviews and from changes to the organization or composition of the groups from which Treasury routinely receives market information and advice. Several survey respondents told us that a good model for a future Treasury survey might resemble the survey we conducted. While Treasury has not conducted a survey of end-investors in the past, similar surveys have been conducted by organizations like SIFMA. Treasury staff and officials agree that more inclusive representation on TBAC would be desirable, but they also said that increasing the number of members (even to the TBAC charter limit of 20 members) could impede optimal committee functioning. Treasury staff told us that if the committee were to become too large, it might be difficult to allow enough time for members to provide feedback and contribute to discussions. Treasury staff and officials told us that they could broaden TBAC membership to include one or more representatives of buy-and-hold investors such as insurance companies or endowments. Treasury staff and officials also told us that one of Treasury’s priorities is to improve investor outreach and to collect information more systematically. Treasury officials told us that improvements to how Treasury communicates with the market will be a priority for ODM in 2010 and beyond. As previously noted, one challenge for Treasury will be to gauge investor demand for Treasury securities in order to finance the historically large deficits expected in the medium and long term. Faced with this challenge, communication with investors becomes essential. When we surveyed major domestic holders of Treasury securities in August 2009, many survey participants indicated that their demand for TIPS could increase. As seen in table 1, as of July 31, 2009, survey respondents reported holding $143 billion in TIPS—which represented approximately 26 percent of the total marketable TIPS outstanding. This amount also constituted approximately 21 percent of the survey respondents’ total portfolio of Treasury securities. This share allocated to TIPS may indicate that our survey respondents already viewed TIPS favorably. According to Treasury data, TIPS generally represent a much smaller percentage of total outstanding Treasury securities. At the time of our survey in August 2009, TIPS constituted only 8 percent of all Treasury marketable securities outstanding. The increased investor interest in TIPS, as reported through our survey, corroborates information we received from individual interviews conducted earlier with large domestic holders of Treasury securities. The investment managers we interviewed at public and private pension funds, mutual funds, insurance companies, and commercial banks expressed continued or growing interest in TIPS during 2009.
At the start of 2009, financial-market experts were recommending that investors purchase TIPS and other inflation-protected investments. Over the course of the year, mutual funds began reporting large inflows into inflation-protected funds, which consist mostly of TIPS. During 2009, the five largest inflation-protected bond mutual funds increased their total net assets by almost 70 percent. The largest of these funds saw its net assets increase by an average of almost $1 billion per month in 2009. Also during 2009, one of the largest fixed income managers introduced three new mutual funds designed to protect investors against inflation. One of the new funds is intended to provide a hedge against inflation but also provide tax-efficient income by allocating at least half of its investments to municipal bonds. The other two new funds are intended to produce monthly income payments that consist of both inflation-adjusted interest and principal. These two funds consist primarily of investments in TIPS and have initial target maturity dates of 2019 and 2029. GAO and others have recommended that Treasury take action to improve the liquidity of TIPS, which could lower Treasury’s cost of borrowing. Prior to 2009, holdings of Treasury securities by sectors that we surveyed had been in decline for nearly two decades. (See fig. 9.) By the onset of the financial crisis in 2008, the share of Treasury securities relative to each sector’s total assets was less than half its historical average for the preceding two decades. By the end of 2007, no sector reported holding more than 5.5 percent of its total assets in Treasury securities. In 2009, Treasury decided to increase TIPS issuance, reversing the trend of the past few years. As we previously reported, Treasury reduced the annual gross amount of TIPS issuance by 19 percent from 2006 to 2008. Treasury then gradually increased total TIPS issuance in 2009 by 4 percent to $58 billion. During the August 2009 TBAC press conference, Treasury officials stated that they are committed to the TIPS program and to issuing TIPS in a regular and predictable manner across the yield curve. Further, during the November 2009 and February 2010 TBAC meetings, Treasury officials announced that they planned to gradually increase TIPS issuance and would consider making changes to the TIPS auction calendar by increasing the number of TIPS auctions. These changes, which are meant to improve TIPS liquidity, are based on Treasury’s own analysis and on input that Treasury received from market participants and GAO. At the time of this report, Treasury had already begun to increase TIPS issuance. The size of the 10-year TIPS auction held in January 2010 was $10 billion—an increase of 25 percent over the previous 10-year TIPS auctions held in July 2009. If investors continue to express and demonstrate interest in TIPS, Treasury may be able to issue a greater amount of TIPS at a lower cost than in past years. Survey respondents who anticipated a change in their demand for TIPS said that any reallocation into TIPS would most likely be drawn from holdings of nominal Treasury securities or non-Treasury assets. Investments into TIPS were less likely to come from an overall increase in total assets. As previously reported, if Treasury has to increase the supply of nominal securities substantially to fund larger deficits, yields may have to rise in order to attract enough buyers due to the saturation of the nominal Treasury market.
Therefore, issuing TIPS may make sense since a substantial shift in the composition of Treasury issuance into TIPS from nominal Treasuries could also lead to lower interest rates paid on the remaining nominal Treasury issuance. The most common reasons cited by our survey respondents for this specific anticipated shift into TIPS were inflation protection and TIPS’ valuation relative to other investments—the same reasons most often cited for a general interest in TIPS. Compared to other sectors that we surveyed, mutual fund companies and state and local government retirement funds also responded that some of their investments in TIPS were dedicated based upon active allocation decisions made by clients. Treasury has also responded to investor concern about the maturity of TIPS issued across the yield curve by reintroducing the 30-year TIPS. At the November 2009 TBAC meeting, there was general consensus to eliminate the 20-year TIPS and replace it with the 30-year TIPS. TBAC members thought this change may allow Treasury to lower its cost of borrowing while it would create a TIPS issue that could be better compared to the 30-year nominal issuance point. Following the TBAC meeting, Treasury announced that it would discontinue the auctions of the 20-year TIPS and reintroduce the 30-year TIPS starting in February 2010. As we reported previously, investors demand a premium for less-liquid TIPS, which increases Treasury’s borrowing costs. Through our survey, market participants identified a number of options to improve participation at TIPS auctions, which could improve TIPS liquidity. Most respondents to our survey were more likely to purchase TIPS in the secondary market rather than at auction. The most common reasons listed for this were infrequency of TIPS auctions, portfolio needs, relative valuation, and liquidity. On average, survey respondents planned to purchase almost 80 percent of their TIPS in the secondary market. Over half of survey respondents said that although they never participate in Treasury auctions, they were active in the secondary market at least monthly. Survey respondents said that increasing the dollar amount of TIPS issued per auction and increasing the frequency of TIPS auctions could help improve participation during TIPS auctions. Survey respondents also pointed out that a clearer commitment from Treasury to the TIPS program would improve TIPS liquidity. In interviews with us in February 2010, some primary dealers said that Treasury should modify its current TIPS auction schedule to decrease the amount of time between TIPS auctions, thereby staggering the supply of TIPS so that issuance is not as concentrated. Since 2005, Treasury has held eight TIPS auctions every calendar year—two auctions each in January, April, July and October. At the May 2010 TBAC press conference, the Assistant Secretary for Financial Markets, Mary Miller, said that Treasury will be adding a second reopening of the 10-year TIPS, which would lead to six 10-year TIPS auctions a year. According to Treasury, these changes would help improve TIPS liquidity while diversifying its funding sources. The combination of increased TIPS issuance, Treasury’s statements of commitment to TIPS, and the reintroduction of the longer-dated 30-year TIPS, could help sustain a viable TIPS futures market. In interviews and in published material, some financial-market experts have noted the lack of a viable futures trading market for TIPS. 
Some of these experts have speculated that a successful futures contract could bolster the liquidity of TIPS. In a public discussion with the Chicago Mercantile Exchange Group (CME) in March 2009, Acting Assistant Secretary for Financial Markets Karthik Ramanathan explained that futures products help increase the liquidity, depth, and price transparency of the U.S. Treasury market. According to market experts, however, the lack of liquidity in the current TIPS market would make it difficult to sustain a viable TIPS futures product. In interviews with GAO in February 2010, primary dealers expressed different opinions on the structure of a potential inflation futures contract. We heard preferences for both a cash-settled index as the basis for an inflation futures contract and also an inflation futures contract with a basket of deliverables similar to how futures contracts for nominal securities are structured. Primary dealers told us that if TIPS were to become more liquid, then a TIPS futures contract might succeed, and that this in turn could further increase the liquidity of TIPS. One of Treasury’s important channels of communication is with primary dealers. Primary dealers that we interviewed told us that they are satisfied with their communication with Treasury. They told us they had recently raised concerns about what they see as consequences of the recent increase in direct bidding in Treasury auctions. Direct bidders are financial institutions that, like primary dealers, can bid for and buy Treasury securities competitively at auction directly from Treasury instead of in the secondary market. Unlike primary dealers, direct bidders are not required to participate in all Treasury auctions. Most Treasury securities are bought at auction by primary dealers. A much smaller, but growing volume of securities is purchased by direct bidders. In April 2004, Treasury stated that there were 825 “investors” making use of the auction system that allows direct bidding. Three months later, a Treasury press release announcing the new version of TAAPSLink, the communications system through which auction market participants are provided Internet-based access to Treasury auctions, said that over 600 “firms” used the on-line bidding system. This is the most recent information that Treasury has disclosed to the market on the potential number of direct bidders at an auction. Direct bidding has grown in size and volatility since 2008. Figure 10 illustrates both the overall increase in participation and the volatility of that participation. Direct bidder purchase share in auctions for 5- and 10- year notes and 30-year bonds began to trend upward and show greater variation starting on October 30, 2008, and then hit a 5-year high of almost 30 percent at the March 11, 2010, auction of 30-year bonds. During this period, the average direct-bidder purchase share of 5- and 10-year notes and 30-year bonds was 5.8 percent with a standard deviation of 5.3 percentage points. This contrasts with the period between May 5, 2003, and October 30, 2008, when direct bidders purchased an average of only 1.6 percent of 5- and 10-year notes and 30-year bonds. The standard deviation during this time period was 3.9 percentage points. Primary dealers have made public statements expressing concerns about both the increase and the unpredictable role of direct bidders in Treasury auctions. Through interviews, we learned that they had expressed their concerns to both Treasury and the FRBNY. 
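The before-and-after comparison above is a straightforward split of auction-level purchase shares at a cutoff date, with a mean and standard deviation computed for each period. The sketch below shows that calculation; the auction dates and shares are invented sample values, not the underlying auction data.

```python
# Sketch of the before/after comparison described above: mean and
# standard deviation of direct-bidder purchase shares (percent of each
# auction), split at a cutoff date. The sample data are invented; the
# real analysis would use auction-level results for 5- and 10-year
# notes and 30-year bonds.
from datetime import date
from statistics import mean, pstdev

auctions = [
    (date(2008, 5, 7), 1.2),
    (date(2008, 8, 6), 2.0),
    (date(2008, 11, 12), 4.5),
    (date(2009, 6, 10), 7.3),
    (date(2010, 3, 11), 29.6),
]

cutoff = date(2008, 10, 30)
before = [share for d, share in auctions if d < cutoff]
after = [share for d, share in auctions if d >= cutoff]

for label, shares in (("before", before), ("after", after)):
    print(f"{label}: mean {mean(shares):.1f}%, "
          f"std dev {pstdev(shares):.1f} percentage points")
```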
Primary dealers said they believe both more direct bidding and the increase in the volatility of direct bidding “dis-incentivize” primary dealers because it means they have less certainty of information surrounding a particular Treasury auction. For example, if an investor purchases Treasury securities directly at auction instead of going through a primary dealer, a primary dealer could have less information available about the auction. Volatility in direct bidding also increases uncertainty. Increased uncertainty could lead to primary dealers making less aggressive bids, which could lead to increased borrowing costs for Treasury. Some primary dealers also told us that an overall lack of transparency regarding direct bidding potentially contributes to “sloppy auctions.” A sloppy auction typically means poor reception or demand for a Treasury auction relative to what was expected and leads to higher yields at the auction. Treasury officials told us that they have not seen evidence of this and have also stated publicly that Treasury supports broad access to the auction process and that direct bidding fosters competition, therefore helping achieve its goal of the lowest cost of borrowing over time. According to primary dealers that we interviewed, part of the lack of transparency surrounding direct bidding comes from not knowing the exact number of direct bidders that could potentially bid at each auction and what sectors of the market they represent. One source of information that provides a breakdown of auction results by sector is Treasury’s data on Investor Class Auction Allotments, which is released on the 7th business day of each month. Primary dealers that we spoke with said that if Treasury were to provide this data on a more frequent basis it might alleviate some of the uncertainty that currently exists in the market. In 2008 and 2009, Treasury successfully raised unprecedented amounts of cash in a very short period of time. However, absent policy changes, the medium- and long-term fiscal outlook means that Treasury will have to continue to raise significant amounts of cash, while achieving its goal of the lowest cost of borrowing over time. Raising significant amounts of cash at the lowest cost of borrowing over time requires sufficient competitive participation at auctions. Information from market participants on their demand for Treasury securities, including the type of information that we received from our survey of the largest domestic holders of Treasury securities, is critical to this effort. Treasury initially raised cash to meet TARP and Recovery Act needs by issuing primarily short-term debt, including CMBs, dramatically changing the composition of its debt portfolio. In 2009, Treasury began to take steps to return the composition of its debt portfolio to its pre–market crisis structure. In September 2009 we reported that a more robust TIPS program could benefit Treasury by diversifying and expanding its funding sources and reducing the cost of nominal securities. Treasury reaffirmed its commitment to TIPS and announced plans to gradually increase issuance of TIPS. Through our survey of the largest domestic holders of Treasury securities in August 2009, we found that Treasury can improve the extent to which it receives sufficient information from end-investors. We also found that options exist for Treasury to increase investor participation in TIPS auctions and further improve TIPS liquidity.
We briefed Treasury on the findings contained in this report in October 2009, December 2009, and March 2010. The Secretary of the Treasury should continually review methods for collecting market information and consider the following actions to help gauge investor demand in the context of projected sustained increases in federal debt: conducting a systematic and periodic survey of the largest holders of Treasury securities in all sectors, and increasing the number of representatives on TBAC and ensuring diverse representation by including members that represent end-investors. The Secretary of the Treasury should continue to reduce the amount and term to maturity of CMBs, when appropriate. The Secretary of the Treasury should consider increasing the number of TIPS auctions and distributing them more evenly throughout the year in order to improve participation in TIPS auctions. We requested comments on a draft of this report from the Secretary of the Treasury and received e-mailed comments on behalf of the Treasury from its Deputy Assistant Secretary of Federal Finance. Treasury agreed with our findings, conclusions, and recommendations, and said that the report captured Treasury’s actions clearly and succinctly. Treasury officials also pointed out that at the May 2010 quarterly refunding, they announced that (1) they are increasing the frequency of investor class data releases, and (2) they decided to increase the frequency of 10-year TIPS auctions, both of which are consistent with our recommendations. Treasury thanked us for our discussion of communications strategy and for the information provided from our survey. They noted that Treasury is always looking to improve its communication with market participants and they agreed that this is particularly important now given ongoing, elevated financing needs. Treasury also provided technical comments, which are incorporated into the report where appropriate. We are sending copies of this report to interested congressional committees, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Susan J. Irving at (202) 512-6806 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix IV. We analyzed the yield differential for all cash management bills (CMB) issued over a 2-year period beginning on January 1, 2008, and ending on December 31, 2009, removing from our analysis any cash management bills that were used for the Supplementary Financing Program (SFP). We used two methods to analyze the yield differential between CMBs and equivalent regular 4-, 13-, 26-, or 52-week Department of the Treasury (Treasury) bills. First, we compared CMB yields to recently auctioned Treasury bills of similar maturity. Second, we compared CMB yields to average secondary market yields on Treasury bills of similar maturity. There are limitations to both of these yield differential estimates. Neither captures any effect from the announcement of CMBs on yields for similar maturing bills. If the announcement of a CMB increased the yield on similar maturing bills, then our estimate may be understated. Also, in some cases, the surrounding Treasury bills we used could include CMBs that were reopenings of regular Treasury bills.
This would also lead to an understatement of the yield differential because the yield on the outstanding securities including CMBs would be higher than outstanding securities that did not include CMBs. We compared CMB yields with the yields of similar Treasury bills that were auctioned the same day, or immediately before and after the date of the CMB auction. Once we identified two Treasury bills (one auctioned before and one after each CMB) with a maturity closest to the CMB, we derived a weighted average yield for the two bills. The weights were based on the relative difference in each bill’s auction date from that of the CMB, with the Treasury bill having a closer auction date receiving a greater weight and the weights summing to 1. Then, the weighted average Treasury bill yield was subtracted from the CMB auction yield to obtain the yield differential. In the final step, the yield differential was applied to the dollar amount of the CMB to obtain an estimate of the cost of issuing a CMB instead of a regular Treasury bill. Taking a second approach, we also calculated the difference between a CMB’s yield and the average secondary market yield on other Treasury bills that are most similar (in terms of maturity) to the CMB on the day of auction. This method was used in our previous report on cash management bills issued in 2006. See GAO, Debt Management: Treasury Has Refined Its Use of Cash Management Bills but Should Explore Options That May Reduce Cost Further, GAO-06-269 (Washington, D.C.: Mar. 30, 2006). To help achieve our objective of determining what changes the Department of the Treasury (Treasury) could make to better gauge end-investor demand and increase auction participation, we conducted a Web-based survey of domestic institutional investors in Treasury securities. In June 2009, we conducted 12 structured interviews with the two largest holders of Treasury securities in each of the following sectors: mutual funds; commercial banks; life insurance companies; property casualty insurance companies; state and local government retirement funds; and private pension funds. Based on what we learned in these interviews, in August 2009 we conducted a more comprehensive Web-based survey that was sent to the 12 holders of Treasury securities that we interviewed in June, as well as to additional holders of Treasury securities in each sector, with the exception of private pension funds. Private pension funds were excluded from the Web-based survey because our initial interviews revealed that their funds are managed primarily by external investment management companies represented in other sectors. Neither the structured interviews nor the Web-based survey is generalizable. We established two criteria for inclusion of a sector in the nonprobability sample for our 12 structured interviews. First, the sector had to have Treasury holdings in the top 20 of all sectors as of the third quarter of 2008, according to table L.209 of the Flow of Funds Account of the United States. Second, the sector had to be identified by market experts that we interviewed in February 2009 as having the potential to purchase large quantities of Treasury securities in the future. Both criteria were used to ensure that the sectors have a relevant financial stake in Treasury markets. The household sector and federal-government retirement funds sector were identified by the criteria, but not included in our sample.
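The first yield-differential method described earlier in this appendix amounts to a time-weighted interpolation between the two nearest regular bill auctions, followed by applying the differential to the CMB amount. A minimal sketch of that calculation follows; the dates, yields, amounts, and day-count convention are illustrative assumptions, not data from our analysis.

```python
# Sketch of the first yield-differential method described above: weight
# the yields of the two regular bills auctioned nearest the CMB (one
# before, one after) by how close each auction date is to the CMB's
# auction date (weights sum to 1), subtract the weighted average from
# the CMB yield, and apply the differential to the CMB amount.
# All inputs are illustrative; the term and actual/360 day count are assumptions.
from datetime import date

def yield_differential(cmb_date, cmb_yield, bill_before, bill_after):
    """bill_before and bill_after are (auction_date, yield_in_percent) pairs."""
    gap_before = (cmb_date - bill_before[0]).days
    gap_after = (bill_after[0] - cmb_date).days
    total_gap = gap_before + gap_after
    w_before = gap_after / total_gap   # the closer bill gets the larger weight
    w_after = gap_before / total_gap
    weighted_yield = w_before * bill_before[1] + w_after * bill_after[1]
    return cmb_yield - weighted_yield  # in percentage points

diff = yield_differential(
    cmb_date=date(2009, 3, 4), cmb_yield=0.30,
    bill_before=(date(2009, 3, 2), 0.27),
    bill_after=(date(2009, 3, 9), 0.28),
)
# Rough cost of the differential on a hypothetical $25 billion, 15-day CMB.
extra_cost = 25e9 * (diff / 100) * (15 / 360)
print(f"differential: {diff * 100:.1f} basis points; est. cost ${extra_cost:,.0f}")
```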
The household sector was not included due to the difficulty of identifying, ranking, and contacting individual household investors. In addition, it would have been beyond our ability to survey a sufficient number of households to reach the 50 percent market-share criterion that we later applied to the other sectors. The federal-government retirement funds sector was not included because the Thrift Savings Plan does not invest in nominal Treasury securities and Treasury Inflation Protected Securities (TIPS), and therefore, it was outside the scope of our survey. To identify the organizations within each sector that would receive our Web-based survey, we used rankings of the largest organizations in each sector based on total assets (or an equivalent financial indicator). From these ranked lists, we determined Treasury holdings for each organization, and selected as many organizations as needed to represent at least 50 percent of the total amount of Treasury holdings for that sector (based on table L.209 of the Flow of Funds Account of the United States, as of the third quarter 2008). This methodology was the same for the structured interviews mentioned above, except that for the structured interviews we selected and interviewed the two largest organizations in each sector. Several survey questions solicited open-ended responses from respondents. To analyze the responses to these questions, two GAO analysts separately reviewed the responses and identified themes for each item. They then developed a mutual list, which was used to independently code survey responses. Independently coded responses were then compared and successfully coded at 80 percent agreement or higher, with any remaining disagreements reconciled through discussion. At least 80 percent agreement was obtained in all cases. The coded responses were then used in two ways: (1) to obtain a sense of the range of perspectives on a given point, and (2) to obtain an idea of the frequency or extent to which a particular viewpoint or perspective was held by our survey respondents.
n. Other - Please select an answer and specify below.
o. Other - Please select an answer and specify below.
p. Other - Please select an answer and specify below.
If you answered "Other" in rows n through p above, please specify.
Specify entry in "n" above:
Specify entry in "o" above:
Specify entry in "p" above:
Treasury Inflation-Protected Securities (TIPS)
8. If your organization currently invests in TIPS, what percentage of your organization's TIPS purchases is dedicated based on active allocation decisions made by clients? (Enter percentage below.)
following maturities? (Select one answer in each row.)
d. 30-year TIPS (if introduced)
10. What are the primary reasons your organization purchases TIPS or plans to purchase TIPS in the future? (Please list up to five reasons in order of importance.)
11. In your opinion, what effect, if any, would a 30-year TIPS have on demand for TIPS securities with other maturities?
Would increase demand Would have no effect on demand Would decrease demand No basis to judge
Treasury Inflation-Protected Securities (TIPS) - Continued
12. Do you anticipate any change in your organization's demand for TIPS from this year to next year?
Yes - Continue with question 13.
No (Click here to skip to question 15)
No basis to judge (Click here to skip to question 15)
(Check all answers that apply.)
A reallocation into TIPS from nominal Treasury securities
A reallocation out of TIPS into nominal Treasury securities
A reallocation into TIPS from an increase in total assets
A reallocation out of TIPS from a decrease in total assets
A reallocation into TIPS from non-Treasury assets
A reallocation out of TIPS into non-Treasury assets
Other change(s) - Please specify below.
No basis to judge
If you answered "Other change(s)" above, please specify below.
14. What are the primary reasons behind the change in demand for TIPS from this year to next year? (Please list up to five reasons in order of importance.)
15. In your estimation, about what percent of your organization's TIPS purchases in the next year will be made through the following means?
b. Secondary market:
Please enter any comments you may have relating to your answer to question 15 above.
16. Would the following actions by Treasury increase the likelihood that your organization would: 1) participate in a TIPS auction, and 2) buy more securities at each auction? (Select one answer in each row.)
a. Increase the frequency of TIPS auctions and
b. Increase TIPS issuance amounts per auction
c. Purchase off-the-run TIPS securities
d. Other - Please answer and specify below.
e. Other - Please answer and specify below.
f. Other - Please answer and specify below.
If you answered "Other" above, please specify other ways to increase participation or amount of securities bought.
Specify entry in d. above:
Specify entry in e. above:
Specify entry in f. above:
17. The liquidity of TIPS has been found to be less than nominal Treasury securities. In your opinion, what actions could Treasury take to enhance the liquidity of TIPS? (Please list up to five actions in order of importance.)
Action #5:
18. In your opinion, what are the risks that your organization faces as an investor in Treasury markets? (Please list up to five risks in order of importance.)
19. In your opinion, what actions could be taken to address and mitigate the risks identified in question 18 above? (Please list up to five actions corresponding to the risks identified in question 18 above.)
Action #5:
20. In your opinion, to what extent, if at all, does Treasury currently receive sufficient information and input from end-investors?
Very great extent Great extent Moderate extent Some extent Little or no extent No basis to judge
21. How effective, if at all, do you consider each of the following communication channels between your organization and Treasury to be at providing Treasury with sufficient information and input from end-investors? (Select one answer in each row.)
a. Direct contact with Treasury debt management
b. Direct contact with Federal Reserve officials
c. Direct contact with Treasury Borrowing Advisory Committee (TBAC) members
d. Direct contact with Primary Dealers
e. Direct participation in TBAC or Primary
f. Other - Please select an answer and specify below.
g. Other - Please select an answer and specify below.
h. Other - Please select an answer and specify below.
If you answered "Other" in rows f through h above, please specify.
Specify entry in "f" above:
Specify entry in "g" above:
Specify entry in "h" above:
22. What actions could Treasury take to ensure that it receives sufficient information and input from end-investors? (Please list up to five actions in order of importance.)
23. Are you ready to submit your final completed questionnaire to GAO? (This is equivalent to mailing a completed paper questionnaire to us.
It tells us that your answers are official and final.)
Yes, my questionnaire is complete - Click on the "Exit" button below to submit your answers.
No, my questionnaire is not yet complete.
You may view and print your completed questionnaire by clicking on the Summary link in the menu to the left.
In addition to the contact named above, Jose Oyola (Assistant Director), Tara Carter (AIC), Richard Cambosos, Stuart Kaufman, Mark Kehoe, Erik Kjeldgaard, Richard Krashevski, Margaret McKenna, Donna Miller, Dawn Simpson, Jeff Tessin, Jason Vassilicos, Gregory Wilmoth, and Melissa Wolf all made contributions to this report.
This report is part of GAO's requirement, under the Emergency Economic Stabilization Act of 2008, to monitor the Department of the Treasury's (Treasury) implementation of the Troubled Asset Relief Program and submit special reports as warranted from oversight findings. It evaluates Treasury's borrowing actions since the start of the crisis, and how Treasury communicates with market participants in the context of the growing debt portfolio and the medium- and long-term fiscal outlook. GAO analyzed market data; interviewed Treasury, the Federal Reserve Bank of New York, and market experts; and surveyed major domestic holders of Treasury securities. The economic recession and financial-market crisis, and the federal government's response to both, have significantly increased the amount of federal debt. While the composition of Treasury's debt portfolio changed in response to this increase, Treasury has taken a number of steps in the past year to return the composition of the debt portfolio to pre-market crisis structure. One action Treasury has undertaken has been to reduce its reliance on cash management bills (CMB). While CMBs provided Treasury with needed borrowing flexibility immediately following the financial market crisis in 2008, Treasury paid a premium for its sustained use of CMBs in 2008 and 2009. In recent months, Treasury also has begun to stabilize shorter-term bill issuance and increase issuance of longer-term coupons. Given the medium- and long-term fiscal outlook, Treasury will continue to be presented with the challenge of raising significant amounts of cash at the lowest costs over time. This makes evaluating the demand for Treasury securities increasingly important. Sufficient information from market participants on their demand for Treasury securities, including the type of information that GAO received from its survey of the largest domestic holders of Treasury securities, will be critical as Treasury moves forward to meet these challenges. In GAO's survey, investors reported increased demand for Treasury Inflation Protected Securities (TIPS) and suggested ways for Treasury to further improve TIPS liquidity and thereby lower borrowing costs. Treasury receives input from market participants through a variety of formal and informal channels, but overall satisfaction with these communication channels varies by type of market participant. Market participants suggested to GAO a number of changes including increasing investor diversification on the Treasury Borrowing Advisory Committee (TBAC) and regular collection of information from end-investors. Primary dealers, who are satisfied with their communication, raised concerns about the recent increase in direct bidding and its effect on Treasury auctions.
Military enlistees must meet basic DOD and military service entrance qualification standards on age, citizenship, education, aptitude, physical fitness, dependency status, and moral character. Screening to determine whether applicants meet these standards or merit being granted a waiver begins with a recruiter’s initial contact with a potential applicant and continues through their entrance into basic training. In deciding whether to grant a moral waiver, the services employ the “whole person” concept: They consider the circumstances surrounding the criminal violations, the age of the person committing them, and personal interviews. As figure 1 shows, the services differ in both the way they categorize criminal offenses and the criteria they use for requiring moral waivers. In general, however, the services require moral waivers for convictions or adverse adjudications for criminal offenses as follows: (1) “felonies”—such as murder and grand larceny; (2) “non-minor (serious) misdemeanors”—assault and petty larceny; (3) “minor misdemeanors”—discharging a firearm within city limits and removing property from public grounds; (4) “minor non-traffic”—disorderly conduct and vandalism; (5) “serious traffic”—driving with revoked license and failure to comply with officer’s directions; and (6) “minor traffic”—speeding and driving without a license. The services, except for the Army, also grant moral waivers for preservice drug and alcohol abuse. None of the services grant waivers for certain offenses, such as the trafficking, sale, or distribution of illegal drugs. Appendix I provides detailed information about how often and for what reasons the services granted moral waivers to enlistees during the fiscal years 1990 through 1997 period. Overall, DOD’s Defense Manpower Data Center (DMDC) data for this 8-fiscal year period shows the following: moral waivers accounted for 62 percent of all waivers granted and represented 13 percent of all individuals enlisted; although annual DOD-wide enlistments fluctuated between about 162,000 and 223,000 during this period, the rate of granting moral waivers consistently declined from 17.5 percent to 7.8 percent of all enlistees—a total decrease of over 60 percent; of the moral waivers granted, non-minor (serious) misdemeanors and preservice drug and alcohol abuse categories accounted for over 75 percent, minor non-traffic and traffic offenses for about 20 percent, and felonies committed either as an adult or juvenile about 3 percent; and the number of moral waivers granted in all categories decreased, but felony and non-minor misdemeanor waivers increased as a percentage of total moral waivers granted. The services’ policies and procedures for screening for criminal histories and granting moral waivers are extensive and are intended to encourage applicants to reveal their criminal history information. However, because of limitations in records checks, the services are not always able to obtain or substantiate all available criminal history information. First, the majority of the national agency checks are conducted without using an applicant’s fingerprints to verify or search for records. Also, service policies and federal, state, and local laws and policies sometimes limit or preclude access to criminal history information, and the criminal history databases relied on by the services for record checks are incomplete. 
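The decline noted above can be read two ways: as a drop in the waiver rate or as a drop in the number of waivers granted, which also reflects changes in total enlistments. The sketch below works through the count arithmetic; the pairing of enlistment levels with particular years is an assumption drawn from the range cited in the text, not DMDC data.

```python
# Illustrative arithmetic, not DMDC figures: the decline in the number
# of moral waivers depends on both the waiver rate and total enlistments.
# The enlistment counts assigned to each fiscal year below are assumed
# endpoints of the range cited in the text.
rate_fy1990, rate_fy1997 = 0.175, 0.078      # waiver rates from the text
enlistments_fy1990 = 223_000                 # assumption (high end of range)
enlistments_fy1997 = 162_000                 # assumption (low end of range)

waivers_fy1990 = rate_fy1990 * enlistments_fy1990
waivers_fy1997 = rate_fy1997 * enlistments_fy1997
decline = 1 - waivers_fy1997 / waivers_fy1990
print(f"{waivers_fy1990:,.0f} -> {waivers_fy1997:,.0f} waivers "
      f"({decline:.0%} decrease under these assumptions)")
```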
Of further concern is the services’ practice of sending enlistees to training before the results of criminal record checks are received, which incurs unnecessary costs. Each service screens for criminal background information in a similar manner. Figure 2 shows how the following screening tools fit in the recruiting process: (1) face-to-face interviews, briefings, and completion of forms; (2) law enforcement agency record checks at the state and local levels; and (3) national agency record checks conducted by the Defense Security Service. According to recruiting officials, screening to identify criminal histories begins when recruiters contact potential applicants informally—over the telephone, at shopping malls, or in schools. Through interviews and briefings listed in figure 3, the services provide applicants with as many as 14 different opportunities to disclose any prior criminal offenses and convictions to as many as 7 different recruiting, military entrance processing station, and training officials. The recruiting officials also stated that security interviews are conducted for applicants enlisting in jobs requiring secret or top secret clearances. Applicants are required to complete the following forms used in obtaining criminal history information: (1) Record of Military Processing—Armed Forces of the United States (DD Form 1966), (2) Personnel Security Questionnaire (SF-86), (3) the Police Record Check (DD Form 369), and (4) the Armed Forces Fingerprint Card (DD Form 2280). These forms elicit information on police record histories, drug and alcohol use and abuse, financial records and delinquencies, and any juvenile arrest or criminal activity. At this point, recruiters may request state and local background checks. After formal interviews with recruiters, applicants go to 1 of 65 military entrance processing stations to take the Armed Services Vocational Aptitude Battery test; undergo a physical exam; submit fingerprints; participate in more interviews and briefings; and take their first oath of enlistment, which formally enlists them as unpaid members of the Individual Ready Reserve forces and places them into the Delayed Entry Program. Entry into the Delayed Entry Program signals the beginning of the national agency check. Most of these record checks are conducted using descriptive data—an applicant’s name, social security number, sex, date of birth, and race—without using fingerprints. When the checks involve fingerprints, the services request a fingerprint verification—a comparison of an enlistee’s fingerprints against FBI criminal records to ensure that they are from the same individual whose name was associated with a possible arrest record identified through the descriptive data search. Also, during the Delayed Entry Program, recruiters are in contact with the enlistees and continue to inquire about their criminal background and any current contact with law enforcement agencies. If recruiters discover that enlistees have a criminal history or that they committed offenses while in the Delayed Entry Program, the enlistees may be discharged. After the Delayed Entry Program period, enlistees report again to a military entrance processing station where they undergo a second physical examination and more interviews and briefings and, if qualifications are met, take a second enlistment oath (which places them on active military duty). 
Subsequently, enlistees are asked again to disclose disqualifying information when they report to basic training, which lasts from 6 to 12 weeks depending on the service. By the 6-month point in their first terms, most enlistees have completed follow-on training in technical skills, though the length of such training can vary widely (from a few weeks to a year or more). Moral waivers can be initiated at any stage of the recruiting process—during contacts with recruiters, visits to the military entrance processing stations, or the Delayed Entry Program. The level at which the moral waivers are approved depends on the seriousness of the offense. Waivers for the most serious offenses must be approved by the commanders of the recruiting commands in the Army, the Navy, and the Air Force and by the two regional recruiting commanders in the Marine Corps. Applicants or enlistees that intentionally conceal any disqualifying information may be refused enlistment at any point during the recruiting process or, after enlisting, discharged for fraudulent enlistment. Quality control procedures have been incorporated into the recruiting process to ensure that recruiters do not conceal negative information about applicants. Each service (1) has established performance and moral character standards that recruiters must meet; (2) requires successful completion of a recruiter training course; (3) assigns some of its most senior recruiter personnel to military entrance processing stations; (4) conducts periodic inspections of recruiting activities; and (5) investigates all allegations relating to recruiter improprieties, which include an irregularity, misconduct, or malpractice. Malpractice, in particular, is considered by DOD to include willfully concealing disqualifying factors, misleading or misinforming applicants, or violating recruiting policies and procedures resulting in processing an ineligible applicant. Examples of recruiter malpractice include telling the applicant to not claim all dependents or to conceal bankruptcies or previous criminal history. DOD data for the 7-fiscal year period ending September 30, 1997, show that the percentage of recruiter impropriety investigations opened was less than 1 percent of the total DOD enlistments; the percentage of investigations substantiated was less than 0.1 percent of these enlistments. DOD’s checks of criminal history records are limited because (1) the majority of national agency checks are conducted without using fingerprints, (2) the services have limited access to criminal history information, and (3) criminal history data sources are incomplete. The services do not always require fingerprint verification because they do not believe the risk is great that enlistees will enter the service with undisclosed serious criminal histories, and they are concerned about the time and cost associated with fingerprint verification. However, it is the services’ policy to conduct national agency checks with fingerprint verifications when (1) the descriptive data check reveals a possible arrest record; (2) applicants are aliens in the United States, prior service persons, or individuals who have criminal record activity; or (3) any information is revealed that may require more investigation for a security clearance. As a result, 73 percent of the enlistees in fiscal years 1992 through 1997 were checked for criminal history information at the national level using only descriptive data—name, social security number, race, sex, and date of birth. 
Fingerprint verification checks were made on the remaining 27 percent, accounting for 32 percent of the cases in the Army, 25 percent in the Navy, 22 percent in the Marine Corps, and 20 percent in the Air Force. According to FBI officials, this fingerprint verification currently used by the services provides less certainty than a full fingerprint search, which compares an enlistee’s fingerprints against all criminal records in the FBI files. For example, fingerprint verification does not assure the services that the search results are accurate if an applicant has used an alias not recorded in the criminal records. A full fingerprint search is required to positively identify the person and detect when they have used undisclosed aliases. The services do not obtain or substantiate all available criminal history information because federal, state, and local laws and policies limit or prohibit access. DOD policy states that the military services shall obtain and review criminal history record information from the criminal justice system and Defense Security Service to determine whether applicants are acceptable for enlistment and for assignment to special programs. However, under the Security Clearance Information Act (5 U.S.C. 9101), criminal justice agencies are required to provide this information to DOD only when an individual is being investigated for eligibility for access to classified information or sensitive national security duties. These agencies, which include federal, state, and local agencies, are not required to provide this information for determining basic eligibility or suitability for enlistment (i.e., employment). DOD gains access to this information through the national agency checks, which are used for granting security clearances to enlistees. These national agency checks are initiated by military entrance processing station personnel for all enlistees soon after they enter the Delayed Entry Program and are employed as unpaid members of the reserves. Recruiters attempting to gain access to this information for screening applicants prior to sending them to the military entrance processing stations, however, cannot obtain it when state and local laws and policies restrict access. The sooner applicants’ criminal records are known to military managers, the sooner they can make informed decisions about whether to grant moral waivers. Section 520a of title 10 of the U.S. Code authorizes DOD and the services to request criminal history record information regarding enlistees from state and local governments. However, state and local policies sometimes prohibit the release of information, or require fees or fingerprints to obtain it. A telephone survey of the states by the Navy Recruiting Command in 1996 showed that 43 states released information on crimes committed by adults. The survey also showed that 33 states charged fees ranging from $5 to $59 and that 18 states and the District of Columbia required fingerprints before releasing information. The Army has a policy to request local and state record checks for all applicants, but will not pay these fees, and therefore, does not obtain information from states that charge fees. The other services request these record checks only if an applicant admits to a criminal history. Navy and Marine Corps policy allows recruiters to pay for the checks; Air Force policy requires applicants to obtain the checks and pay any fees associated with the checks.
Further, because the services do not take fingerprints until after local and state record checks have been requested, the services do not obtain information from 18 states and the District of Columbia. Finally, recruiters frequently cannot obtain information on applicants’ juvenile criminal records. Generally, most state laws restrict access to juvenile records. The 1996 Navy survey showed that only three states release these records. In addition, under 18 U.S.C. 5038, federal juvenile delinquency proceedings’ records are safeguarded from disclosure to unauthorized persons. Specifically, federal juvenile records may not be disclosed for any purposes other than judicial inquiries, law enforcement needs, juvenile treatment requirements, employment in a position raising national security concerns, and disposition questions from victims or victims’ families. These juvenile crime records are likely to be a major source of criminal history information for the population targeted by military recruiters—men and women generally 17 to 21 years old. However, according to Department of Justice officials, when juveniles are charged with serious crimes such as murder and rape, most states try them as adults in criminal court. Their records, if reported by states, are available in the FBI’s national criminal records system. Criminal history checks, therefore, should identify many of the more serious juvenile criminal offenders who are tried as adults. In 1992, the Department of Justice revised its regulations (28 C.F.R. 20.32) to allow the FBI to collect, maintain, and provide authorized access to juvenile records for juveniles tried or otherwise adjudicated in juvenile proceedings. Before 1992, the FBI was prohibited from collecting juvenile records with the exception of those cases when a juvenile had been processed as an adult. However, according to Department of Justice officials, each state determines whether its own laws permit submitting these juvenile records to, or authorizing access through the FBI. Also, states may elect not to record the offense, and local law enforcement may decide to label the offense a status violation (truancy, for example) rather than a criminal violation. As of February 1998, about 213,700 (less than 1 percent) of the 37,857,111 criminal subjects in the FBI’s identification records system were under the age of 18. Department of Justice studies have shown for decades that criminal history databases are incomplete and, as discussed in the next section, they have funded initiatives for improvements. The FBI considers a record to be complete when all significant events, such as the arrest and disposition, are available. A complete record also includes the individual’s name, social security number, age, sex, fingerprints, and other physical descriptive type information. According to FBI officials, completeness of the FBI database is dependent upon states’ submissions of arrest information and court disposition actions, and the states depend on local agencies to submit information to the state repository. Reporting of this information by all levels of law enforcement agencies to the next higher level is voluntary and does not always occur. The Department of Justice periodically requests information from the states regarding the completeness of their criminal history databases. As of December 31, 1997, among the 50 states and the District of Columbia, the percentage of arrest records that have final dispositions recorded varied greatly, ranging from 5 to 98 percent. 
Also, for arrests within the last 5 years, three states reported that less than 30 percent of their records were complete. Conversely, nine states reported that 90 percent or more of their records were complete for the same period. At the federal level, as of June 1998, the FBI database had a total of 76,427,487 active arrests, but dispositions were on file for only 46 percent of the arrests. According to a Department of Justice Assistant Attorney General, state criminal records systems tend to be more comprehensive than the federal system. This is particularly true in the case of non-felony arrests and convictions. Many nonserious offenses are either not reported to the FBI or, once reported, are not retained because they fail to satisfy retention criteria specified in regulation (28 C.F.R. 20.32). For example, the FBI is prohibited from maintaining nonserious offenses such as drunkenness, traffic violations, and vagrancy. The FBI database, however, includes reports of vehicular violations, which resulted in personal injury or property damage and driving while under the influence of alcohol or drugs. The military services’ policies allow enlistees to begin basic and follow-on training and, in some cases, enter their first-duty stations before investigative results of record checks are available. If the national record search does not reveal that an enlistee has a criminal history, results from the national agency check are usually received during the Delayed Entry Program. If the national record search reveals that an enlistee has a criminal history, the national agency check usually takes longer in order to positively identify the individual, obtain records, and in some cases, conduct an investigation. The results of this check may not be available until after the beginning of basic training. In some cases, an enlistee may be in a follow-on technical school or even at a first-duty station before the results of investigative reports are received. The frequency with which enlistees enter basic and follow-on training with undisclosed serious criminal histories and are subsequently discharged because of unfavorable record checks is unknown. The Navy, however, had limited data regarding the actions taken as a result of this unfavorable information. During the first 11-1/2 months of 1997, the Navy reviewed 2,368 enlistee cases that contained unfavorable criminal history information; 389 (16.4 percent) were subsequently discharged because of unfavorable information. When enlistees are discharged from service after beginning basic training, the services incur training costs that could have been avoided. On the basis of the Navy’s 389 discharges, we estimate that the Navy incurred over $2 million in unnecessary costs. The other services could not provide data that would allow us to make comparable estimates. The services risk having to absorb these costs because they are trying to avoid the cost of not filling allotted training slots. Only the Army conducts an in-depth interview with enlistees whose record checks have not been received to determine the possibility of a concealed record and assigns them control numbers before sending them to basic training. Army officials told us that, with few exceptions, no one is sent to a first-duty station unless the records check has been received. 
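The $2 million estimate above is back-of-envelope arithmetic on the Navy's 389 discharges. The sketch below illustrates how such an estimate is built; it uses only the figures reported in this section plus an assumed per-enlistee training cost, which is not stated in the report.

```python
# Back-of-envelope sketch of the avoidable-cost arithmetic described
# above. The report gives 389 Navy discharges and an estimate of over
# $2 million; dividing the two implies an average avoidable training
# cost of roughly $5,100 per discharged enlistee. The explicit
# per-enlistee figure below is an assumption for illustration.
discharges = 389
estimated_total_cost = 2_000_000                 # "over $2 million"

implied_cost_per_discharge = estimated_total_cost / discharges
print(f"Implied cost per discharged enlistee: ${implied_cost_per_discharge:,.0f}")

# The same arithmetic run forward, under an assumed per-enlistee cost.
assumed_cost_per_enlistee = 5_200                # assumption
print(f"Estimated total: ${discharges * assumed_cost_per_enlistee:,.0f}")
```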
There are several ongoing initiatives that would help DOD to improve the process for obtaining complete and timely criminal history information and avoid enlisting and training individuals with undesirable backgrounds. These initiatives include more thorough background checks using full fingerprint searches and credit checks, automation of security questionnaire information, a new FBI fingerprint imaging and classification system, and continuing efforts to improve the completeness of the criminal history database. Although these initiatives cover several aspects of the criminal history screening process, fall under the responsibility of various organizations, and would require some changes in current policies and procedures, DOD has not developed an approach for planning and coordinating their implementation. As a result, it is not yet in a position to take full advantage of the benefits of these initiatives. First, on January 1, 1999, DOD implemented Executive Order 12968, signed August 4, 1995, which expands the requirements for background investigations for all individuals in jobs requiring a security clearance. The Defense Security Service will be responsible for conducting a (1) national agency check using fingerprints; (2) local agency check, which requests local jurisdictions to provide criminal record information; and (3) credit check that provides information on financial responsibility. (Prior to January 1, 1999, the minimum requirement for background investigations for enlistees requiring secret and confidential clearances included the national agency check using only descriptive data, not fingerprints.) This new requirement will increase the quality of criminal history record checks for those enlistees filling jobs requiring a security clearance. Second, the Defense Security Service requested that, by January 1, 1999, all DOD activities exclusively use the Electronic Personnel Security Questionnaire, which replaces the paper version of the SF-86. The automated form allows personnel security data to be more efficiently recorded, checked for completeness, and transmitted in electronic form. Also, the Defense Security Service will be able to expedite its performance of background investigations and efficiently store information for future retrieval. Third, in July 1999, the FBI plans to implement the Integrated Automated Fingerprint Identification System. The FBI developed this system to capture, submit, process, match, and store fingerprints in a paperless environment, which will permit electronic—rather than manual—fingerprint searches. With it, the FBI expects that (1) electronically scanned fingerprints will be more readable—thereby eliminating the delays caused by rejecting smudged fingerprints, which must be resubmitted; (2) fingerprint matches will be more accurate because more fingerprint detail will be taken into account; (3) the turnaround time for fingerprint searches for DOD national agency checks will be reduced—24 hours instead of the current average of 16 days; and (4) the workload of full fingerprint searches for DOD could be processed in a timely manner without a significant change to current fees. Finally, during the last several years, the need to improve the quality of criminal history records has been one of the major challenges facing federal, state, and local criminal justice agencies. 
The FBI Criminal Justice Information Services Division’s Strategic Plan has a goal of having at least 80 percent of its criminal history records complete (containing both arrest and disposition information) by fiscal year 2003. Also, the Department of Justice has supported three major programs since 1988 that provide funding incentives to the states to improve the accuracy and completeness of criminal record information. During fiscal years 1990 through 1998, these programs awarded over $1.47 billion to the states. DOD does not have a clear strategy for implementing these initiatives. First, regarding the implementation of Executive Order 12968, the services have not determined the number of enlistees that will require a security clearance and, therefore, be subject to the required expanded background checks. Currently, about 50 to 60 percent of military jobs require a security clearance, and according to an Assistant Secretary of Defense official, the number may increase as technology becomes more sophisticated. Also, the services have not determined when these investigations will occur. If the Defense Security Service initiates record checks early in the recruiting process, the services could avoid the costs incurred when enlistees are sent to basic training before receiving disqualifying criminal history information. Second, the Defense Security Service has made the new Electronic Personnel Security Questionnaire available and provided training; however, with the exception of the Air Force, use of the form has been extremely limited. According to Military Entrance Processing Command officials, the services have not used the new form because of technological limitations. The Office of the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) told the services that investigations may take longer and be more costly when this new form is not used. Third, regarding electronic fingerprinting, although several military entrance processing stations have tested electronic fingerprint scanners, DOD has not determined how it will use this automated system to enhance the quality and timeliness of their record checks. Also, DOD and the FBI have not reached agreement regarding the options that will be available for new services and costs that will be incurred under the FBI’s Integrated Automated Fingerprint Identification System. Furthermore, DOD has not formulated a coordinated approach for integrating these initiatives into the recruiting process to address some of the deficiencies in their record checks. The DOD officials pointed out that the initiatives have not been implemented yet and that DOD was dependent on the Department of Justice to make available the new fingerprint technology and provide greater completeness of the national criminal records database. However, DOD is responsible for and will be implementing in 1999, the Executive Order 12968 requirements for more thorough security clearance background investigations and the Electronic Personnel Security Questionnaire. The services and their recruiting commands, the Military Entrance Processing Command, and the Defense Security Service have not yet determined how they will implement these initiatives within their current recruiting practices or whether new practices are needed to take full advantage of the possible benefits. The services have extensive policies and procedures for gathering self-reported criminal history information and granting moral waivers. 
Their reliance, however, on applicant self-disclosure, completion of required forms, and criminal history record checks from state, local, and national criminal history databases without a full fingerprint search limits the screening process and increases the risk of enlisting individuals with undesirable backgrounds. Use of the Electronic Personnel Security Questionnaire could minimize the time and costs associated with investigations conducted as part of the Defense Security Service’s national agency checks. Use of the FBI’s Integrated Automated Fingerprint Identification System could facilitate the use of full fingerprint searches as part of the recruiting process and make the record checks more thorough. Collectively, these initiatives give DOD the opportunity to more fully obtain and substantiate criminal history information in a timely manner, avoid enlisting individuals with undesirable backgrounds, and eliminate the need to send enlistees to training before all criminal history information is available. Implementing these initiatives would also enable DOD to benefit from having more complete criminal history information available as a result of the database improvements funded by the Department of Justice. However, DOD has not determined how it will integrate these initiatives into its current criminal history screening process and, therefore, has not put itself in a position to take full advantage of them. Because these initiatives cover several aspects of the screening process, fall under the responsibility of various organizations, and represent some changes in current policies and procedures, it is essential that DOD carefully plan and coordinate its efforts to implement them. Therefore, we recommend that the Secretary of Defense take the following actions: Develop and monitor a DOD-wide plan to use the initiatives cited in this report. Such a plan should, at a minimum, incorporate the benefits of using the Defense Security Service’s Electronic Personnel Security Questionnaire and the FBI’s Integrated Automated Fingerprint Identification System. Additionally, the plan should address the integration of these two initiatives with the expanded security clearance background investigation requirements contained in Executive Order 12968. The plan should also include specific time frames and budget requirements for implementation. Require all national agency checks for enlistment into the military services to be based on a full fingerprint search to (1) reduce the risks associated with enlisting individuals who have been convicted of the more serious misdemeanors and felonies and (2) identify individuals who have used aliases. Direct the services, after the initiatives available in 1999 are in use, to end their practices of sending enlistees to training and to first-duty stations without having all available criminal history information. In commenting on a draft of this report, DOD and the Department of Justice generally concurred with the report findings and recommendations, and emphasized several areas of concern. 
DOD described its plans to act on the report recommendations as follows: To develop and monitor a DOD-wide plan to use the initiatives cited in this report, DOD stated that the Defense Accession Data Systems Integration Working Group, chaired by the Deputy Director of Operations, Military Entrance Processing Command, has identified the need to establish a subgroup led by the Office of the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) to address these initiatives and develop a DOD-wide plan. The Working Group discussed plans for the subgroup at its quarterly meeting in January 1999. To reduce the risks associated with enlisting individuals who have been convicted of the more serious misdemeanors and felonies, and to identify individuals who have used aliases, DOD stated that it will require a full fingerprint search for all potential enlistees. It noted, however, that implementation will depend upon availability of automated fingerprint scanners at the military entrance processing stations. Regarding the services’ practices of sending enlistees to training and first-duty stations without having all available criminal history information, DOD stated that before directing such a change, a system needs to be developed to ensure a prompt turnaround time and allow the flexibility to process applicants without completed criminal history checks as exceptions to policy when criminal history information is delayed. DOD emphasized that enlistment screening will be improved with a system that ensures prompt availability of all applicant criminal history information, including that from state and local law enforcement agencies, including juvenile records. DOD noted that our report does not fully address its need for timely access to state and local criminal information at a reasonable cost. It noted that many records of youth crime do not reach national databases. DOD commented that the absence of complete data makes it difficult to evaluate enlistment waiver rates because the services cannot waive offenses they cannot identify. The Department of Justice also stated that DOD needs to obtain juvenile records presently protected under existing state laws. We agree that juvenile criminal records may contain information that would provide DOD with a more complete assessment of the criminal histories of applicants and our report generally describes limitations on access beginning on page 12. However, evaluating the pros and cons of access to juvenile records was beyond the scope of our review, and we clarified the Scope and Methodology section accordingly. The Department of Justice also emphasized that fingerprint verification currently used by the military services is not to be confused with, nor is it a substitute for, positive identification by a full fingerprint search. It believes that only through a full fingerprint search can the military be assured that enlistees have not fraudulently listed their identities. The Department of Justice provided additional information to support its views on the importance of full fingerprint searches, which our report recommends. We agree with the distinction between fingerprint verification and full fingerprint searches and modified the report to clarify this point. DOD’s and the Department of Justice’s comments are presented in their entirety in appendixes II and III, respectively. DOD and the Department of Justice also provided technical comments, which we have incorporated as appropriate. 
This review focused on DOD’s policies and procedures for screening criminal history information for enlistees, including national agency checks, and for granting moral character waivers. To determine the extent to which relevant criminal history information on potential enlistees is available to the DOD military services, we reviewed the Air Force, the Army, the Marine Corps, the Navy, and the U.S. Military Entrance Processing Command policy guidance and regulations and discussed them with recruiting command and U.S. Military Entrance Processing Command officials. Also, we discussed with these officials the internal control and quality assurance procedures used to monitor screening procedures. We reviewed applicants’ enlistment files at three military entrance processing stations to determine whether screening procedures had been followed. To identify federal government initiatives that could improve the process of obtaining criminal history information, we interviewed DOD and Department of Justice officials and discussed the new requirements for security clearances, the Integrated Automated Fingerprint Identification System, automation of security questionnaire information, and continuing efforts to improve the completeness of the criminal history database. Regarding the completeness of and access to state and local records, we obtained information from DOD and Department of Justice officials. We did not obtain information directly from state and local officials regarding their laws and policies pertaining to DOD’s access to their criminal history records. Also, we did not assess the pros and cons of restricted access to juvenile criminal history records. To supplement our objectives, we analyzed DMDC enlistment and waiver data for fiscal years 1990 through 1997 to determine the extent to which the services granted moral waivers and the type of moral waivers approved. To determine the reasons and rates of separations for enlistees granted moral waivers compared with those without moral waivers, we analyzed DMDC separation data for enlistees entering the military in fiscal years 1990 through 1993 who separated within their first 4 years of service. Fiscal years 1990 through 1993 were the most recent years for which complete separation data were available. We performed our work at the following DOD, service, and Department of Justice locations: Directorate for Accession Policy, Office of the Assistant Secretary of Defense, Force Management Policy, Washington, D.C.; Security Directorate, Security and Information Operations, Office of the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence), Washington, D.C.; and Defense Security Service, Baltimore, Maryland; U.S. Army Recruiting Command, Ft. Knox, Kentucky; Navy Recruiting Command, Arlington, Virginia; Marine Corps Recruiting Command, Arlington, Virginia; and Air Force Recruiting Service, Randolph Air Force Base, San Antonio, Texas; U.S. Military Entrance Processing Command, North Chicago, Illinois; Military Entrance Processing Station, San Antonio, Texas; Military Entrance Processing Station, Chicago, Illinois; and Military Entrance Processing Station, Richmond, Virginia; and FBI, Washington, D.C.; FBI Criminal Justice Information Services Division, Clarksburg, West Virginia; Office of Justice Programs, Bureau of Justice Statistics and Bureau of Justice Assistance, Washington, D.C.; and Office of Juvenile Justice and Delinquency Prevention, Washington, D.C. 
We conducted our review from October 1997 to January 1999 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force, and the Commandant of the Marine Corps. We are also sending copies to the U.S. Attorney General; the Director, FBI; the Administrator, Office of Juvenile Justice and Delinquency Prevention; and the Administrator, Office of Justice Programs. We will make copies available to others upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. To supplement the overall objectives of this review, we analyzed Defense Manpower Data Center (DMDC) enlistment and separation data. Our objectives were to (1) determine how often and for what reasons the services granted moral waivers to enlistees and (2) compare the reasons for separation for those enlistees who entered the services with and without moral waivers. For sound analyses, we needed high-quality data that were accurate, reliable, and comparable. DMDC data, however, are of limited quality because enlistees may have entered military service without their past criminal history records being discovered and, therefore, entered without a moral waiver that should have been granted. Also, the services and the Military Entrance Processing Command apply moral waiver codes inconsistently, and the services differ in the way they use separation codes. Nonetheless, until the Department of Defense (DOD) completes its database improvements to standardize definitions and coding structures for enlistment and separation data, the DMDC data are the best available for describing DOD's experiences with granting moral waivers. Given these data limitations, however, the following analyses generally indicate that the number and percentage of new active duty enlistees granted moral waivers have consistently decreased during the 8-year period ending fiscal year 1997. Furthermore, during the first 4 years of service, enlistees granted moral waivers in fiscal years 1990 through 1993 generally separated from military service for similar reasons and at comparable rates to those enlistees who were not granted moral waivers. Table I.1 shows the number and percentage of enlistees granted moral waivers for fiscal years 1990 through 1997 for each service and DOD-wide.
Table I.1: Number and Percentage of Enlistees Granted Moral Waivers (fiscal years 1990-97)
Table I.2 shows the types, number, and percentages of moral waivers granted to enlistees DOD-wide for fiscal years 1990 through 1997. As shown, the services are granting fewer moral waivers in all categories. Although felony and non-minor misdemeanor waivers increased as a percentage of total waivers granted over the period (from 2 to 5 percent for felonies and 33 to 58 percent for non-minor misdemeanors), the actual number of these waivers granted decreased from 857 to 705 for felonies and from 12,858 to 8,542 for non-minor misdemeanors.
Table I.2: Type, Number, and Percentage of Moral Waivers Granted to Enlistees DOD-wide (fiscal years 1990-97)
The services could not explain the reasons for these trends. However, we were told that the following service policy changes in waiver criteria account for some, but not all, of the changes: In July 1994, the Marine Corps, which had the largest decrease, loosened its minor traffic offense criterion from "more than three" to "more than four" offenses. At the same time, preservice drug abuse criteria were tightened to include any marijuana experimentation or use. In fiscal year 1995, the Army revised its moral waiver criterion for non-minor misdemeanors from one offense to two. The Navy's granting of moral waivers remained fairly constant until fiscal year 1997. Prior to fiscal year 1997, the Navy's waiver totals included moral waivers granted for both enlistment and special programs such as advanced electronics and nuclear fields, which required more stringent moral character standards. In fiscal year 1997, however, the Navy began to report enlistment and program moral waivers separately. The Air Force's granting of moral waivers increased during the 8-year period. Air Force officials could not specify the reasons for this increase, but suggested the following factors: (1) fluctuations in Air Force moral waiver criteria for minor traffic violations; (2) changing attitudes of law enforcement and judicial communities, such as getting tough on crime, greater use of adverse adjudications, and community service; and (3) decreasing trends in Air Force enlistments. Of the enlistees beginning service during fiscal years 1990 through 1993 (the most recent years for which most separation data are available), 573,160 separated within their first 4 years of service for the reasons shown in figure I.1. Of these separations, the 93,632 enlistees granted moral waivers separated from the enlisted force within 4 years of service for generally the same reasons and at similar rates as the 479,528 who enlisted without moral waivers.
Figure I.1: Reasons and Rates for DOD-wide Separations for Individuals Enlisting During Fiscal Years 1990-93 and Separating Within Their First 4 Years of Service. (The figure shows completed enlistment term, 26.7 percent (153,302); completed enlistment term and immediately reenlisted, 16.1 percent (92,144); misconduct, 14.5 percent (83,190); substandard performance, 11.1 percent (63,609); released, 7.5 percent (42,953); hardship/death/other, 6.6 percent (37,716); and Officer Candidate School (OCS), 0.9 percent (5,171).)
Regarding the principal positive reasons for separating, 31 percent of those granted a moral waiver completed their term and left the service compared with 26 percent of those without a moral waiver. However, as shown in figure I.2, an additional 17 percent of those without a moral waiver not only completed their initial term but also immediately reenlisted compared with 9 percent of those with a moral waiver.
Figure I.2: Reasons and Rates for DOD-wide Separations for Enlistees With and Without a Moral Waiver (fiscal years 1990-93) (excludes medical, hardship, and other)
For those leaving the service before completing their initial terms, enlistees not granted a moral waiver left more often for substandard performance reasons (such as failure to meet minimum qualifications and unsatisfactory performance), and enlistees granted moral waivers left more often for misconduct reasons.
Of the 16 misconduct reasons, drugs and fraudulent enlistment accounted for about two-thirds of the 7.3 percentage point difference between separating enlistees with and without moral waivers; the two groups differed very little in the other 14 misconduct reasons. Further, as shown in table I.3, enlistees with moral waivers for minor traffic and minor non-traffic offenses and preservice drug and alcohol abuse separated more often for drugs, fraudulent entry, alcoholism, and court-martial than those enlisted with no moral waiver. Enlistees who entered the services with non-minor (serious) misdemeanor waivers generally separated at similar rates and for the same misconduct reasons (except for drugs and alcoholism) as those without waivers. Enlistees with felony waivers separated at a higher rate for fraudulent entry, court-martial, and alcoholism.
Table I.3: DOD-wide Separation Rates for Misconduct by Type of Moral Waiver (fiscal years 1990-93)
In addition, figure I.3 shows that enlistees granted moral waivers leave at generally the same point (the first year, for example) during their first enlistment for misconduct and substandard performance as those without moral waivers.
Figure I.3: Time of DOD-wide Separations for Misconduct and Substandard Performance Reasons for Enlistees With and Without a Moral Waiver (fiscal years 1990-93)
Military Attrition: Better Data, Coupled With Policy Changes, Could Help the Services Reduce Early Separations (GAO/NSIAD-98-213, Sept. 15, 1998).
Military Attrition: DOD Needs to Better Analyze Reasons for Separation and Improve Recruiting Systems (GAO/T-NSIAD-98-117, Mar. 12, 1998).
Military Attrition: DOD Needs to Better Understand Reasons for Separation and Improve Recruiting Systems (GAO/T-NSIAD-98-109, Mar. 4, 1998).
Military Recruiting: DOD Could Improve Its Recruiter Selection and Incentive Systems (GAO/NSIAD-98-58, Jan. 30, 1998).
Military Attrition: Better Screening of Enlisted Personnel Could Save Millions of Dollars (GAO/T-NSIAD-97-120, Mar. 13, 1997).
Military Attrition: Better Screening of Enlisted Personnel Could Save Millions of Dollars (GAO/T-NSIAD-97-102, Mar. 5, 1997).
Military Attrition: DOD Could Save Millions by Better Screening Enlisted Personnel (GAO/NSIAD-97-39, Jan. 6, 1997).
Military Recruiting: More Innovative Approaches Needed (GAO/NSIAD-95-22, Dec. 22, 1994).
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) processes for investigating military enlistees' criminal history, focusing on: (1) the extent to which relevant criminal history information on potential enlistees is available to the military services; and (2) federal government initiatives that could improve the process of obtaining criminal history information. GAO noted that: (1) the military services have extensive policies and procedures for encouraging applicants to self-report criminal history information; (2) among other things, the services repeatedly query each applicant, providing as many as 14 opportunities to disclose any criminal offenses to as many as seven different service and military entrance processing station officials; (3) the services also conduct periodic inspections and investigations to ensure the integrity of the entire recruiting process, which includes the disclosure of disqualifying information; (4) the services, however, are not always able to obtain or substantiate all available criminal history information because of service policies and federal, state, and local laws and policies that sometimes preclude access; (5) the services do not use fingerprints to substantiate the majority of enlistees' criminal histories; (6) without full fingerprint searches, the services cannot detect undisclosed aliases and ensure that they are aware of all available criminal history information; (7) federal law and state and local laws and policies, which generally limit or prohibit disclosure of criminal history information, impede the recruiting community's access to certain criminal history information; (8) in addition, state and local governments sometimes charge fees or require fingerprints to release the information; (9) available criminal history databases (not controlled by DOD) are incomplete; (10) of further concern is the services' practice of sending enlistees to basic training before the results of criminal record checks are received; (11) this practice results in training costs that could be avoided; (12) several DOD and Department of Justice initiatives are underway that could improve the process of obtaining criminal history information; (13) these initiatives have the potential to make available to DOD and the services more complete information on which to base moral waiver decisions and to expedite the process for obtaining record checks; and (14) however, DOD and the services have not yet formulated a coordinated approach for using these initiatives to better ensure that the military does not enlist and train individuals with undesirable backgrounds.
Congress passed DAWIA in 1990 to ensure effective and uniform education, training, and career development of members of the acquisition workforce. Accordingly, the act established DAU to provide training for the DOD acquisition workforce and charged DOD officials with designating acquisition positions, setting qualification requirements, and establishing policies and procedures for training the acquisition workforce. DOD, as part of implementing DAWIA, established career fields, such as program management (see table 1). The act also required DOD to establish career paths, referred to by DOD as certification requirements, for the acquisition workforce. DOD military services and defense agencies must track whether acquisition workforce members meet the mandatory standards established for level I (basic or entry), level II (intermediate or journeyman), or level III (advanced or senior) in a career field, such as contracting, life-cycle logistics, and program management. DAU is responsible for certification training and for designing, maintaining, and overseeing the delivery of certification training courses at each level, among other things. For each career field and level, there are requirements in three areas—education, experience, and training. Certification requirements are the same for civilian and military acquisition workforce members. Table 2 shows the nature of certification training for one of the DAWIA career fields, system planning, research, development, and engineering (SPRDE)—systems engineering, as well as the education and experience requirements for each level in the career field. Besides the certification training it offers, DAU approves alternative certification training providers based on a review by an independent organization—the American Council on Education—of the capability of a potential provider to offer acquisition training and whether the provider's course content addresses the DAU course's learning outcomes. An equivalent course provider must certify annually that its course is current with the DAU plan of instruction for the course. Similarly, DCAI provides both required certification training and supplemental training for the auditor career field. In addition to certification training, DAU offers supplemental training for each career field and for particular types of assignments. For example, for level II contracting in contingency or combat operations, DAU provides courses such as a contingency contracting simulation, a contingency contracting officer refresher, and a joint contingency contracting course. DAU also provides continuous learning modules online to provide acquisition workforce members with a quick reference for material already introduced and courses to help them maintain currency in their career field by achieving the required 80 continuous learning points biennially. Additionally, DAU provides consulting support to program offices, rapid-deployment training on new initiatives, and training targeted to the needs of acquisition field organizations. DAU also engages in knowledge-sharing initiatives, including hosting a number of acquisition communities of practice and providing Web-based acquisition policy and reference materials. In March 2004, we issued a guide for assessing federal training programs that breaks the training and development process into four broad, interrelated components—(1) planning and front-end analysis, (2) design and development, (3) implementation, and (4) evaluation.
The guide discusses attributes of effective training and development programs that should be present in each of the components and identifies practices that indicate the presence of the attribute. For example, under the design and development component, to determine whether an organization possesses the attribute of incorporating measures of effectiveness into courses it designs, the guide suggests looking for practices such as (1) clear linkages between specific learning objectives and organizational results and (2) well-written learning objectives that are unambiguous, achievable, and measurable. For a complete list of the attributes of effective training and development programs, see appendix II. Figure 1 depicts the training and development process along with the general relationships between the four components that help to produce a strategic approach to federal agencies' training and development programs. These components are not mutually exclusive and encompass subcomponents that may blend with one another. Evaluation, for example, should occur throughout the process. DOD's acquisition workforce certification training—centrally administered by DAU—has many attributes of effective training programs that demonstrate the capability to deliver training. DAU's certification training program has a formal process in planning and front-end analysis to ensure that strategic and tactical changes are promptly incorporated into training; uses centralized and decentralized training approaches in design and development; collects data during implementation to obtain feedback on its training programs; and applies appropriate analytical approaches to assess its training during evaluation. However, DOD lacks complete information on the skill sets of the current acquisition workforce for planning and front-end analysis and does not have metrics to assess results achieved in enhancing workforce proficiency and capability through training efforts during evaluation. Complete data on acquisition skill sets are needed to accurately identify workforce gaps, and appropriate metrics are necessary to increase the likelihood that desired changes will occur in the acquisition workforce's skills, knowledge, abilities, attitudes, or behaviors. DOD's certification training program possesses attributes of effective training programs in each of the four components of the training and development process. Following are examples of the attributes we observed in DOD training, categorized by the components of effective training programs. Planning and front-end analysis: Planning and front-end analysis can help ensure that training efforts are not initiated in an ad hoc, uncoordinated manner, but rather are strategically focused on improving performance toward the agency's goals. DAU had processes to ensure that training efforts were coordinated and focused on improving agency goals. Through a formal process that ensures that strategic and tactical changes are promptly incorporated into training, DAU and other DOD stakeholders plan for and evaluate the effectiveness of DAU's training efforts. Each career field has a functional leader, a senior subject-matter expert in the career field who is responsible for annually certifying that course content for certification is current, technically accurate, and consistent with DOD acquisition policies.
Functional leaders are supported by a functional integrated process team for each career field, which consists of subject-matter experts, acquisition career management representatives from the military services and other DOD agencies, and DAU representatives. The functional integrated process team analyzes and reviews data, including end-of-course evaluations, number of students completing a class, wait lists, and certification rates, as well as DOD policy changes and recommendations from reviews, such as the Gansler Commission, to support functional leaders. DAU designs courses in accordance with the functional leader and functional integrated process team decisions. Using this process, strategic and tactical changes were promptly incorporated into training. For example, DAU developed and fielded a new contracting course on federal acquisition regulation fundamentals within a year of direction by the functional leader's organization to create it. Design and development: In design and development, it is important that agencies consciously consider the advantages and disadvantages of using centralized and decentralized approaches. Centralizing design can enhance consistency of training content and offer potential cost savings. DAU evaluates and uses centralized and decentralized approaches for training after considering the advantages and disadvantages. DAU's curriculum development and technologies organizations located at Fort Belvoir, Virginia, provide centralized, integrated design and development of certification courses. These courses are then delivered to the acquisition workforce by five regionally oriented campuses and the Defense Systems Management College School of Program Managers. DAU also compares training delivery mechanisms to determine the appropriate use of different delivery mechanisms (such as classroom or computer-based training) and to ensure efficient and cost-effective delivery. In addition, supplementary training is offered at the Army, Navy, and Air Force commands and program offices we visited, as well as at the Defense Contract Management Agency. While DAU provides a foundation for acquisition and career field knowledge in its certification training, various decentralized sources provide supplementary training more targeted to specific jobs, such as training on service-specific processes or databases and technical topics. Acquisition workforce members at the commands we visited provided the following examples of supplementary training. The contracting offices at the Army Aviation and Missile Command (AMCOM), Alabama, and the Air Force Aeronautical Systems Center (ASC), Ohio, provided unique training in the contracting area. AMCOM's Contracting Center University teaches employees how to do day-to-day tasks associated with their jobs, such as price analysis, price negotiation, and how to use the Army Materiel Command-unique system for preparing contract documents. ASC's "jump start" program teaches, reinforces, and supplements DAU certification training in the contracting career field with illustrative examples not provided in the computer-based contracting courses and offers an opportunity to interact with instructors and other students. The Naval Air Systems Command (NAVAIR), Maryland, provides supplementary training for DAWIA career fields. For example, in the program management career field, NAVAIR offers courses in configuration management and on NAVAIR's technical directives system.
Other acquisition workforce members provided examples of training from other federal agencies or commercial vendors, such as financial training from the Graduate School, United States Department of Agriculture, and Management Concepts, while others said they had brown bag lunches on various topics. Figure 2 below identifies DOD's multifaceted training approach, both centralized and decentralized. The objective of the multifaceted training, in conjunction with the other two certification components—education and experience—is to produce acquisition personnel with the training, education, and experience to perform the acquisition job. Implementation/Evaluation: As with other programs and services that agencies deliver, it is important that agencies collect program performance data during implementation and select an analytical approach that best measures the program's effect to evaluate their training and development efforts. DAU collects customer feedback data during implementation and, during evaluation, uses the four-level Kirkpatrick model as an analytical approach for measuring training effectiveness. As a part of evaluating training, DAU conducts student end-of-course surveys (Level 1-Reaction) and, to a lesser degree, follow-up surveys of students and their managers 60 and 120 days, respectively, after course completion (Level 3-Behavior). DAU tracks the scores from the various surveys by survey section, such as job impact, and uses red-yellow-green stoplight indicators to identify areas of concern overall and by specific courses. DAU also administers pre- and post-training tests to measure learning (Level 2-Learning). To measure organizational impact (Level 4-Business Results), DAU employs measures of efficiency in evaluating and analyzing multiyear data, such as number of students completing courses, cost efficiency, and customer satisfaction trends. Level 4 assessments are resource intensive and have not been extensively used by DAU. DOD is deficient in two attributes of an effective training program—determining the skills and competencies of its workforce for planning and front-end analysis and using performance data to assess the results achieved through training efforts during evaluation. In March 2009, we reported that USD(AT&L) lacks complete information on the skill sets of the current acquisition workforce and whether these skill sets are sufficient to accomplish DOD's missions. We recommended, and DOD agreed, to identify and update on an ongoing basis the number and skill sets of the total acquisition workforce—including civilian, military, and contractor personnel—that the department needs to fulfill its mission. Complete data on skill sets are needed to accurately identify workforce gaps. Not having these data limits DOD's ability to make informed workforce allocation decisions. We reported that USD(AT&L) was conducting a competency assessment to identify the skill sets of its current acquisition workforce but also found that the lack of key data on the in-house acquisition workforce identified in the prior report still exists, though progress has been made. Since we released that report, DOD issued its Strategic Human Capital Plan Update in April 2010. According to DOD, progress was made in completing over 22,000 assessments involving 3 of the 15 career fields—program management, life-cycle logistics, and contracting. The assessments completed to date represent approximately one-fifth of the personnel and career fields.
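DAU's use of red-yellow-green stoplight indicators for survey-section scores, described above, amounts to a simple thresholding rule applied to average scores. The Python sketch below is a hypothetical illustration only; the score scale, threshold values, and section names are assumptions and do not reflect DAU's actual survey instrument or criteria.

```python
# Hypothetical illustration of a stoplight indicator for survey-section scores.
# The 7-point scale, thresholds, and section names are assumptions for illustration;
# they are not DAU's actual instrument or criteria.

def stoplight(average_score: float, green_at: float = 5.5, yellow_at: float = 4.5) -> str:
    """Map an average survey-section score (assumed 1-7 scale) to a stoplight color."""
    if average_score >= green_at:
        return "green"
    if average_score >= yellow_at:
        return "yellow"
    return "red"

section_scores = {"job impact": 5.8, "instructor effectiveness": 4.7, "course materials": 4.2}
for section, score in section_scores.items():
    print(f"{section}: {score:.1f} -> {stoplight(score)}")
```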
Although DAU uses performance data—including customer feedback, number of students completing classes, and cost—to assess the results achieved through training efforts during evaluation, USD(AT&L) has only partially established metrics required in 2005 by its own guidance to provide senior leaders with appropriate oversight and accountability for management and career development of the acquisition workforce. The purpose of these metrics is to help DOD ensure a sufficient pool of highly qualified individuals for acquisition positions and, therefore, relates to the knowledge, skills, abilities, and size of the acquisition workforce, while the DAU performance data measure the performance of DAU against its goals. By incorporating these metrics into the training and development programs it offers, DOD can better ensure that those programs adequately address training objectives and thereby increase the likelihood that desired changes will occur in the acquisition workforce's skills, knowledge, abilities, attitudes, or behaviors. AT&L programs lacking appropriate outcome metrics will be unable to demonstrate how the certification training contributes to organizational performance results. According to USD(AT&L)'s Deputy Director for Human Capital Initiatives, DOD has established some metrics to measure the size of the acquisition workforce that partially satisfy the requirements identified in DOD Instruction 5000.66. For example, DOD measures the cumulative number of civilian and military acquisition positions added as a result of in-sourcing acquisition functions performed by contractors. However, for metrics related to acquisition workforce proficiency and capability, there are no discernible targets, except improvement over the previous year. In addition, DOD's April 2010 Strategic Human Capital Plan Update identified an initiative to establish certification goals as a management tool for improving workforce quality by June 10, 2010. According to the Deputy Director, certification goals were being discussed but had not been established at the time of this report. Although DAU is unable to provide all training requested for acquisition workforce personnel and receives incomplete data for planning its training schedule, most personnel who need required DAWIA certification training receive it within required time frames. DAU plans the number and location of its classes based on data submitted by the Directors of Acquisition Career Management (DACM). However, DOD acquisition and training officials noted that data are generally incomplete when submitted and that additional steps must be taken during the year to meet new requirements as they are identified. DAU has identified the need for an integrated student information system to improve the quality of the data and to provide greater insight into the workforce it supports. Additionally, though the number of DAU course graduates has grown over the past 5 years, DAU has not been able to provide enough class seats to meet the training requirements reported by military departments and defense components. DAU receives annual DACM data submissions for the course scheduling process, but the submissions do not provide the exact information needed to determine training demand for the acquisition workforce. DAU receives class requirements data annually from the DACM offices that it uses when developing course schedules to identify the number and location of DAU courses.
DACM offices compile this information for all offices to establish the overall demand for each military department and the defense agencies for each DAU course. DAU and DACM offices work together throughout the process to improve the accuracy of this information when possible. According to DAU and DACM officials, however, data that are transmitted for schedule development do not fully reflect all demand for the upcoming year because new requirements arise once the schedule is developed. As a result, additional planning and coordination between DAU and DACM offices are necessary to meet the training requirements of the acquisition workforce. For example, in fiscal year 2009, DAU received requests for 142 additional classes outside of the normal scheduling process. DAU was able to support 45 of these requested classes in such areas as program management, contracting, business management, and logistics. According to DAU officials, resources for additional classes are made available when other classes are cancelled. Also, DAU may reallocate allotted classroom seats among departments and agencies to fill additional training needs. DAU officials stated that data on selected acquisition support services that are currently performed by contractors who may transition to in-house DOD personnel are not adequate for planning specific training requirements. Though DOD has established goals for the number of contracted personnel to be converted, DAU officials noted that the exact timing or training backgrounds of the personnel are not known in advance. DAU also uses acquisition workforce data provided quarterly by the DACM offices that include information such as the number of personnel in each acquisition career field as well as the career level, job titles, and status of progress against certification requirements of each workforce member to inform course demand management. According to DAU officials, these data provide a snapshot of the acquisition workforce and certification status, and they use this information to estimate the number, location, and type of classes needed by the acquisition workforce for certification. The data are compiled to create a demand management tool that provides DAU with an imprecise estimate of course requirements and are used to supplement and inform the estimates developed during the scheduling process. However, this demand management tool alone cannot be used by DAU to determine the exact number of classroom seats required each fiscal year. According to DAU officials, the workforce data collected may overstate training requirements because they do not account for training that has already been completed when individuals held a previous acquisition position, nor do they distinguish among multiple classes that may fulfill the same training requirement. Citing incomplete data for scheduling, as well as other deficiencies, DAU has taken steps to procure a student information system that will improve insight into, and management of, the defense acquisition workforce's training needs. DAU began its market research for an integrated student information system in December 2007, viewed vendor presentations and demonstrations throughout 2008, and issued a request for proposal in August 2010. In the request for proposal, DAU identified the need for an integrated system for registration, student services, career management, schedule management, catalog requirements, recording transcripts, and reporting, intended to improve its management of training needs and schedules.
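To make the scope of such a system more concrete, the sketch below outlines a minimal, hypothetical data model for tracking a workforce member's certification status against career-field course requirements, one of the functions an integrated student information system would need to support. The field names, course identifiers, and structure are illustrative assumptions only and do not describe the system DAU solicited.

```python
# Hypothetical, minimal data model for tracking certification status.
# Field names, course identifiers, and structure are illustrative assumptions;
# they do not describe the system DAU solicited in its request for proposal.

from dataclasses import dataclass, field

@dataclass
class WorkforceMember:
    name: str
    career_field: str                  # e.g., "Contracting"
    required_level: int                # certification level required: 1, 2, or 3
    completed_courses: set[str] = field(default_factory=set)

    def is_certified(self, required_courses: dict[int, set[str]]) -> bool:
        """True if all courses required at or below the member's required level are complete."""
        needed: set[str] = set()
        for level, courses in required_courses.items():
            if level <= self.required_level:
                needed |= courses
        return needed <= self.completed_courses

# Example with made-up course identifiers.
requirements = {1: {"CT-101"}, 2: {"CT-201"}, 3: {"CT-301"}}
member = WorkforceMember("example", "Contracting", 2, {"CT-101", "CT-201"})
print(member.is_certified(requirements))  # True
```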
Without an integrated system, DAU states that it will remain reliant on a web of decentralized information that makes reporting and trend analysis difficult and time-consuming. A primary goal of the new system is to provide a comprehensive approach that improves, among other things, tracking of certification status and ensures training reaches the right people at the right time. DAU plans call for the contractor to complete implementation of this new student information system 24 months after the date of contract award, which had not been made as of September 2010. DCAI develops its training schedule based on the requirements expressed in individual development plans and the availability of DCAI resources. Registration for DCAI courses is prescribed and based largely on the individual development plans submitted by DCAA's approximately 3,700-member auditing workforce in fiscal year 2009. Each year, DCAA employees, with the input and approval of their supervisors, develop an individual development plan that lists the DCAI courses and outside training deemed necessary. This information is entered into a system that tracks course requirements and individuals' status against training requirements. Individuals are automatically enrolled in the scheduled DCAI courses. Most of the acquisition workforce receives training within required certification time frames. At the end of fiscal year 2009, approximately 90 percent of the 133,103 members of the defense acquisition workforce had met the certification requirements associated with their positions or were within allowed time frames to do so. Acquisition workforce members we met with from all three military departments and the Defense Contract Management Agency (DCMA) cited challenges in receiving training at the time and location they desired, noting that local DAU locations would fill up quickly and that they would often have to register for courses multiple times prior to enrollment. However, acquisition staff and supervisors told us that this had little effect on being certified within the required time frames for their current positions. Nearly all of the remaining uncertified personnel required training to become certified. While additional training was needed, these individuals may also have been deficient in meeting the education or experience requirements needed for DAWIA certification. Furthermore, DACM officials noted that there could be a number of reasons these individuals had not received required training and stated that while some individuals may not have adequately planned for their training needs, other factors, such as deployment of military personnel abroad, may have limited their access to training. DCAA auditors do not face the same issues with DAWIA certification as the rest of the acquisition workforce. According to DCAI officials, this is largely because they do not have to coordinate demand for courses across several different agencies. All new hires are automatically enrolled in the courses required for level I and level II DAWIA certification. Additionally, DOD reported that approximately 99 percent of the auditing workforce had met certification requirements or were within allowed time frames to do so. By completing the mandatory learning track taught through DCAI classes, DCAA auditors complete certification training within required time frames.
Even though 90 percent of the acquisition personnel who required certification training for their current position received training on time or were within allowed time frames to do so, DAU acknowledges that requests for acquisition workforce training as a whole submitted by the DOD components and military departments exceed what DAU can provide. DAU has incorporated expansion of training into its strategic plans. In its Strategic Plan for 2010-2015, DAU notes that it will play a key role in the USD(AT&L) acquisition workforce growth strategy. For example, USD(AT&L) efforts to grow, train, and develop the defense acquisition workforce will affect DAU's strategic planning over the next several years. DAU notes that workforce growth goals put forth by the Secretary of Defense in April 2009 will increase the demand for DAU training and therefore affect how DAU plans for development of acquisition personnel, requiring careful consideration of resource allocation. The strategic plan also points out a number of other factors that will drive the demand for acquisition workforce training in the coming years, including annual workforce turnover, turnover related to Base Realignment and Closure, and support for new acquisition development needs. As part of its strategy, DAU has also established short-term goals to expand training capacity in its fiscal year 2010 Organizational Performance Plan, including expanding classroom training by 10,000 seats over fiscal year 2009 levels. DAU officials stated that they plan to increase capacity further to provide 54,000 classroom seats in fiscal year 2011. In addition, DAU established and has fulfilled a strategic goal of graduating 150,000 students from its Web-based courses annually. DAU has increased the total number of course graduates and classes in recent years to address demand for acquisition training. DAU has supported more classes than in the past, seeing an increase from 1,279 classroom courses in fiscal year 2005 to 1,505 in fiscal year 2009. In addition, from fiscal year 2005 through 2009, the number of individual graduates from DAU classroom and Web-based courses rose by approximately 77 percent (see fig. 3). To support increases in certification training demand due to workforce growth through new hiring and in-sourcing, DAU uses funding from the Defense Acquisition Workforce Development Fund to provide additional facilities and courses. Though the majority of funding is intended to support the hiring of new staff, DAU, the military departments, and defense agencies received more than $225 million to support new training and additional seats in fiscal years 2008 and 2009. Funds have been used by the military departments to support Army and Navy acquisition boot camps, the Air Force's mission-ready contracting course, and other acquisition training developed by specific military commands. For example, funding was used to develop and implement the "jump start" program at the Air Force's Aeronautical Systems Center that combines material taught through DAU's Level I contracting courses with Air Force-specific information. The Defense Acquisition Workforce Development Fund has also been used by DAU to expand its teaching facilities, hire additional instructors, and schedule additional classes needed for DAWIA certification. DAU received nearly $165 million in fiscal years 2008 and 2009 to expand training. In fiscal year 2009, this funding permitted DAU to offer nearly 7,000 additional classroom seats in 31 different courses.
DAU has also used these funds to develop new training, such as a 4-week course focusing on the Federal Acquisition Regulation that senior DOD contracting officials said was needed to provide a foundation for acquisition fundamentals, and to support acquisition professionals in the field through Service Acquisition Workshops and expanded contingency acquisition training. Despite these increased class offerings, which have accommodated more graduates, DAU has not been able to provide the total number of classroom seats requested by the defense acquisition workforce through the DACMs. Classroom seats requested and class seats scheduled both increased from fiscal year 2007 through 2009. For example, in fiscal year 2009, DOD components requested 52,998 seats for the acquisition workforce across 66 different DAU classroom courses; DAU was able to allocate resources to meet 71 percent of this demand based on its annual budget. However, DAU made use of the Defense Acquisition Workforce Development Fund to provide additional classroom seats to meet the demand for training, allowing it to meet 87 percent of the workforce's requirement in fiscal year 2009. Further, DAU data demonstrate that workforce personnel who require certification training for their current or future position within their career field constitute a large majority of classroom students graduating from DAU courses.

DOD reports that most of the training-related recommendations from previous reviews (the Gansler Commission, the Panel on Contracting Integrity, and our prior report) have been fully implemented. We reviewed 19 recommendations addressing some aspect of acquisition training and found that 11 have been fully implemented, 4 have been partially implemented, and 4 have not been implemented but action has been taken. Two of the four Gansler Commission Report recommendations have been implemented; however, the Army and Office of the Secretary of Defense (OSD) need to take additional steps to ensure the Army "trains as it fights" and that DAU has the resources it needs to train the acquisition workforce. Nine of the 11 Panel on Contracting Integrity recommendations have been fully implemented. DOD has taken actions to address performance-based acquisitions training; however, DOD has not conducted a formal assessment of its guidance or the training. Also, on the basis of information from DOD, we could not determine whether it conducted a review of its Fraud Indicator Training and the Continuity Book/Contracting Office Transition Plan. One of the training-related recommendations we made to DCAA has been partially implemented, and three have not been implemented but action was taken. DCAA needs to take further steps to develop appropriate training for its auditors, and it should seek outside expertise in doing so.

In response to the Gansler Commission report, the Army and OSD have taken steps to improve training and implement the report's recommendations. In 2007, the Gansler Commission made 4 overarching recommendations and, within those 4, the Commission described 35 more in-depth recommendations on Army acquisition and program management in expeditionary operations. Four of those in-depth recommendations pertain to training the DAWIA workforce. As shown in table 3, 2 of the commission's training recommendations have been fully implemented, while the remaining 2 training recommendations require additional action.
While DOD has taken action, additional steps are needed to fully implement the Gansler Commission training recommendations. The following explains why the Army and OSD need to continue their efforts to fully address the Gansler Commission training recommendations.

"Train as we fight:" DOD officials stated that training exercises include contracting and logistics, incorporate lessons learned, and may include training for commanders, but we could not determine the extent to which these elements are included because of a lack of documentation. The Army has mechanisms to capture lessons learned, but it is unclear how they are incorporated into training exercises. For example, the Expeditionary Contracting Command informally receives lessons learned from other Army commands and brigades, but we could not determine whether and how they are incorporated into training exercises because they are not tracked or formally documented.

Provide DAU with needed resources to certify Army individuals requiring level I certification: DAU and the Army do not have the needed resources to emphasize level I DAWIA certification, according to DOD officials. DAU is not adequately funded to meet the acquisition training demand DOD-wide. For example, according to OSD officials, DAU is not fully funded to meet the fiscal year 2011 demand from the services and defense-wide agencies for contracting level I courses. DAU projects meeting 60 percent of the fiscal year 2011 requested seats for these level I courses. The Army depends not only on DAU, but also on the Army Logistics University and the Air Force Mission Ready Airman Course to provide the contracting training needed by its active component personnel.

DOD has not implemented all recommended actions related to defense acquisition workforce training included in the Panel on Contracting Integrity's 2008 and 2009 reports to Congress. The Panel recommended a total of 49 actions to improve acquisition outcomes. Of these recommended actions, 11 specifically addressed acquisition training. See table 4 for a complete list of the recommended actions related to training included in the Panel's reports to Congress in 2008 and 2009. While the Panel reported that all of the recommended actions had been completed, we determined that two of the recommended actions pertaining to training had not been fully implemented: one was not implemented, though action was taken, and one was partially implemented.

Assess effectiveness of DOD guidance and training for executing performance-based acquisition and perform gap analysis in conjunction with DAU: The report did not indicate whether DOD conducted a formal assessment of departmental guidance or a gap analysis of training. The Panel's Appropriate Contracting Approaches & Techniques Subcommittee worked with DAU to determine whether training needed to be updated, collected examples of complex and high-dollar acquisitions, and posted them to an Acquisition Community Connection Web site. The report also noted that DAU would select the best examples from this group for inclusion in its web-based integrated training tool.

Review Fraud Indicator Training and Continuity Book/Contracting Office Transition Plan: The Panel report did not specifically address whether a formal review determined specific gaps in training, as recommended.
In 2008, the Panel's Contracting Integrity in a Combat/Contingent Environment Subcommittee reported that DOD incorporated transition planning and fraud indicator training into the Joint Contingency Contracting Handbook and updated DAU's Joint Contingency Contracting Course.

In addition to the recommendations above that are specific to training, the Panel recommended other actions that also affected training, one of which was not fully implemented. The Contractor Employee Conflicts of Interest Subcommittee reviewed and recommended that the Secretary of Defense issue guidance to clarify the circumstances in which contracts risk becoming improper personal services contracts. DOD formed an ad hoc team to respond to the recommendation, focusing on establishing a Defense Federal Acquisition Regulation Supplement case, DAU course updates, and a DOD instruction update. While the DOD instruction was published, the Panel's report did not mention the status of the Defense Federal Acquisition Regulation Supplement case or the DAU course updates.

In 2009, we made four recommendations regarding DCAA auditor training, which have not been fully implemented (see table 5 for our full recommendations). Three of the recommendations have not been implemented but action was taken, and one has been partially implemented. As stated in our September 2009 report, DCAA faces many challenges and needs fundamental structural and cultural changes to develop a strong management environment and human capital strategic plan. First, we recommended that, once DCAA establishes a risk-based audit approach, it develop a staffing plan that identifies auditor resource requirements, including training needs. Second, we recommended that DCAA establish a position for an expert or consult with an outside expert on auditing standards to shape audit policy, provide guidance, and develop training. While DCAA has taken steps to improve its audit training, such as implementing an initiative to identify the knowledge, skills, and competencies required for DCAA auditors and develop training, according to a DCAA official, it has not yet hired or consulted with an outside expert on auditing to shape its policies and provide guidance. Third, we recommended that DCAA develop agencywide training on government audit standards. Agency officials stated that, as of July 2010, DCAA had developed a new online, introductory course on Generally Accepted Government Auditing Standards (GAGAS) that all DCAA auditors are supposed to complete by September 30, 2010. We are reviewing the new course content and continue to work with DCAA on planned improvements to address the fundamental structural and cultural changes previously identified. Fourth, as DCAA's audit quality assurance program identifies actions needed to address serious deficiencies and GAGAS noncompliance, we recommended that DCAA provide training and follow-up to ensure that appropriate corrective actions have been taken. DCAA has issued audit alerts and provided some guidance through periodic regional office and field office conferences, but has not yet incorporated this guidance into the body of its DCAI audit courses.

DOD's acquisition workforce training program demonstrates many attributes of effective training and development programs; however, there is room for further improvement. DOD recognizes the need to continue its efforts to assess competencies for its acquisition workforce.
Importantly, if this effort is not completed, DOD will be limited in its ability to identify gaps in the skill sets of acquisition personnel, ultimately hampering its ability to effectively acquire the goods and services it needs to accomplish its mission. Notably, opportunities exist to improve the measurement of training's impact on overall organizational performance. If DOD is to fully assess performance improvements, it needs to go beyond measuring the size of the workforce. To provide appropriate oversight of the proficiency and capability of its acquisition workforce, DOD will need metrics to measure skills, knowledge, and abilities, and how certification training contributes to organizational performance results. Furthermore, DAU faces challenges in managing and forecasting training demand data for specific courses, which hinders its ability to plan the upcoming year's course schedule so that required training reaches acquisition workforce members efficiently and cost-effectively. Accurate and timely forecasting of acquisition workforce training requirements and the development of metrics for the proficiency of the workforce are imperative to support DOD's initiatives to improve and grow the acquisition workforce.

We recommend that the Secretary of Defense direct USD(AT&L) to take the following two actions to improve the development, implementation, and evaluation of acquisition workforce training. First, to demonstrate and track how training efforts contribute to improved acquisition workforce performance, establish milestones for the development of metrics to measure how acquisition certification training improves the proficiency and capability of the acquisition workforce. Second, to improve DOD's ability to identify specific acquisition training needs for planning and front-end analysis, establish a time frame for completion and ensure resources are available for implementing an enterprisewide, integrated student information system.

We provided a draft of this report to DOD for comment. In written comments, DOD disagreed with our first recommendation and agreed with our second recommendation. DOD's comments are discussed below and are reprinted in appendix III. DOD did not concur with our recommendation that it develop milestones for the development of metrics to demonstrate and track how acquisition certification training improves acquisition workforce performance. While DOD agreed that metrics should be used to measure the capability of the acquisition workforce, it believes developing milestones for such metrics is unnecessary because existing metrics can be used to this end. DOD states that workforce capability is a function of having the correct number of people working in the right areas with the proper level of education, training, and experience. Specifically, DOD notes five metrics used to measure the size and composition of the workforce as well as the education, training, and experience levels of the individuals who make it up. We recognize that metrics for measuring these elements are valuable for gaining insight into the degree to which required workforce personnel are being certified and filling needed positions. However, as we note in this report and in GAO's guidance for assessing strategic training and development programs, training effectiveness must be measured against organizational performance.
DOD’s existing metrics measure the outputs for certification training, not the outcome in terms of proficiency or capability of the acquisition workforce. Without outcome metrics, DOD cannot demonstrate how certification training contributes to improving organizational performance results. Given the scale and value of DOD acquisitions, we maintain that metrics that link training to acquisition performance outcomes should be developed by the department. We are sending copies of this report to the Secretary of Defense, the DOD Inspector General, and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or needhamjk1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Congress included a provision in the National Defense Authorization Act for Fiscal Year 2010 requiring us to report on the efficacy of the Department of Defense’s (DOD) acquisition and audit workforce training. To determine the efficacy of DOD’s acquisition and audit workforce training, we assessed (1) DOD’s capability to provide defense acquisition workforce certification training, (2) the extent that such training reaches members of DOD’s acquisition workforce, and (3) the extent that training recommendations from previous reviews, including the Gansler Commission, have been implemented. We were not able to report on the efficacy of training for the Defense Contract Audit Agency’s (DCAA) auditing career field because DCAA lacks a strategic plan. A strategic plan is a key document for assessing training programs using the strategic training efforts attributes. For this engagement, we focused on training for DOD personnel covered under the Defense Acquisition Workforce Improvement Act (DAWIA). To assess DOD’s capability to provide defense acquisition workforce certification training, we compared DOD’s certification training programs and processes with the attributes of effective training and development programs identified in GAO’s 2004 guide for assessing strategic training and development efforts in the federal government, which we identified as the most comprehensive source for attributes of effective training programs for our purpose. We interviewed officials at the Defense Acquisition University (DAU) and Defense Contract Audit Institute (DCAI) to obtain an understanding of their training programs and processes, and we obtained documents—such as briefings, guidance, strategic plans, and course catalogs—describing the training programs and processes. We interviewed the Directors of Acquisition Career Management (DACM) for the military services and defense agencies to obtain an understanding of their role in DOD training, to obtain their views on the effectiveness and usefulness of DAU training, and to find out whether supplementary training is provided by the military services. We interviewed the leaders of the functional integrated process teams that support the functional leaders of the 15 DAWIA career fields to obtain an understanding of their role in Acquisition, Technology, and Logistics’ (AT&L) process and criteria for reviewing and approving acquisition workforce training. 
In addition, we visited selected military commands and program offices within those commands to obtain customer perspectives on the effectiveness and usefulness of DAU training and to determine the use of supplementary training. For this purpose, we selected a nongeneralizable sample of one command from each military service based on the following criteria: (1) high level of procurement dollars spent in fiscal years 2008 and 2009 relative to other commands in their military service, based on data from the Federal Procurement Data System-Next Generation; (2) large number of DAU courses completed in fiscal years 2008-2009; and (3) proximity to a DAU regional office with an on-site dean. The commands we visited were the Army Aviation and Missile Command (AMCOM) in Huntsville, Alabama; the Air Force Materiel Command (AFMC) in Dayton, Ohio; and the Naval Air Systems Command (NAVAIR) at Patuxent River, Maryland. In selecting program offices to visit, we reviewed our assessment of selected weapon programs and consulted with the GAO team responsible for our assessment to determine which program offices would likely have a large cross-section of acquisition workforce personnel with whom to discuss training. We visited the following program offices: Joint Attack Munition Systems and Apache at AMCOM; Broad Area Maritime Surveillance Unmanned Aircraft System and E-2D Advanced Hawkeye at NAVAIR; and Global Hawk Unmanned Aircraft System at AFMC. At AFMC, we also visited the Aeronautical Systems Center's Contracting Directorate, and, at AMCOM, we visited the Contracting Center. We also visited Defense Contract Management Agency personnel to obtain their perspectives on DAU training and to find out about their use of supplementary training. Finally, we visited a nongeneralizable sample of two DCAA locations (the Alabama Branch Office in Huntsville, Alabama, and the Boston Branch Office in Boston, Massachusetts) to obtain the customers' perspectives on DCAI training and determine the use of supplementary training. We did not examine the appropriateness of the certification training itself or the content of courses required for certification. We did not assess the efficacy of training provided by supplementary training sources.

To assess the extent to which acquisition training reaches appropriate acquisition personnel, we reviewed DAU and DCAI policies, and we received briefings from DAU and DCAI personnel concerning the determination of training requirements, resource allocation, and scheduling of classes. We reviewed and analyzed the training requirements for all defense acquisition career fields. We collected and analyzed defense acquisition workforce and training data maintained in the AT&L Data Mart system used by DAU for determining course demand and the certification status of acquisition workforce members. This provided an understanding of the number of class requests received, class seats scheduled, and students who registered for and completed these courses in past fiscal years. We also used these data to analyze the number of and reasons for uncertified acquisition workforce personnel. We assessed the reliability of these data by reviewing data query information for specific data requests and interviewing knowledgeable officials who collect and use these data. We intended to focus our analysis on data for fiscal years 2005 through 2009; however, due to data reliability concerns, we limited portions of our analysis to data available for fiscal years 2007 through 2009.
We determined that the data were sufficiently reliable for the purposes of this report. We conducted interviews with DAU, DCAI, military department, and defense agency representatives who have a role in communicating or analyzing training demand and training resource allocation to gain a fuller understanding of the processes and challenges faced when providing training for the defense acquisition workforce. In addition, we conducted interviews with acquisition workforce members and supervisors to understand the degree to which they are able to enroll in needed acquisition training and the challenges they may face in completing this training. We interviewed DAU officials and obtained budget documents to determine DOD's use of the Defense Acquisition Workforce Development Fund (Section 852 of the National Defense Authorization Act for Fiscal Year 2008) for training and for helping to meet training demand.

To determine the extent to which training recommendations from previous reviews, including the Gansler Commission, have been implemented, we identified previous reviews with training recommendations, and we interviewed and obtained documentation from agency officials on the status of DOD's implementation of the recommendations. Specifically, for Gansler Commission recommendations, we interviewed Defense Procurement and Acquisition Policy (DPAP) officials to determine the applicability of the training recommendations to the acquisition workforce, and we obtained the Office of the Secretary of Defense's (OSD) and the Army's status in implementing the recommendations and supporting documents, including reports detailing the recommendations and action items. We analyzed the supporting documents to assess the status, and, based on our review, we assigned one of the following six status assessments to each of the recommendations. (1) Fully Implemented. The entire wording of the action item has been fulfilled. (2) Partially Implemented. Only a portion of the action has been implemented. When the wording of the action item had multiple parts, if one part or a portion of a part had been implemented (but not all parts), we categorized the action item as "partially implemented." (3) Not Implemented-Action Taken. No part of the action item has been implemented, but steps have been taken toward the completion of the action item. For example, if legislation had been introduced to address the action but had not been enacted into law, we categorized the action item as "not implemented-action taken." (4) Not Implemented-No Action. No part of the action item has been completed, and no action has been taken to address the action item. For example, if the action item called for changes in legislation but no legislation has even been proposed, we categorized the action item as "not implemented-no action." (5) Insufficient Information. Insufficient or conflicting information prevented us from determining the status of the action item. (6) Other. Implementation has occurred or action has been taken that, while not responsive to the letter of the action item, generally was consistent with its purpose.
For example, if the action item states that a particular position should be created to coordinate an effort but the coordination is achieved without the creation of the position, we categorized the action item as "other." We compared our assessment with OSD's and the Army's assessment, and, in making our final determination on implementation status, we provided OSD and Army officials with the results of our initial determinations. The officials reviewed these results and provided us with additional, clarifying information that we considered and, when we believed appropriate, used in making our final determination.

For the Panel on Contracting Integrity reports, we examined whether DOD had implemented the Panel's recommendations from 2007 and 2008 by reviewing the 2007, 2008, and 2009 reports. Specifically, we compared the recommended actions from the 2007 report with the reported action in the 2008 report. The same comparative analysis was conducted using the recommended actions from 2008 and the 2009 report. We differentiated recommendations that specifically mention training from those that do not, as well as recommendations for which training was involved in implementation. We compared our assessment with the Panel's assessment. We provided our analysis to DPAP officials to review and provide additional information that we considered in making our final determination. To determine whether DCAA has implemented GAO's recommendations from a prior report, we interviewed officials at DCAA to understand what actions had been initiated in response to our recommendations.

We conducted this performance audit from December 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, key contributors to the report were Penny Berrier Augustine, Assistant Director; Johana Ayers; Alezandra Brady; Helena Brink; John Krump; Morgan Delaney Ramaker; Erin Schoening; Angela Thomas; Desiree Thorp; and Tom Twambly.
The President has announced his intention to improve the acquisition process, particularly given the half a trillion dollars the federal government spent in fiscal year 2009 on acquiring goods and services. The Department of Defense (DOD) spent $384 billion in fiscal year 2009 on goods and services--double what it spent in 2001. A high-quality workforce with the right competencies and skill sets will be critical to improving DOD acquisitions. GAO was mandated to determine the efficacy of DOD's certification training for its acquisition workforce. GAO assessed (1) DOD's capability to provide certification training, (2) the extent that such training reaches members of the workforce, and (3) the extent that previous training recommendations have been implemented. To conduct this work, GAO compared DOD's certification training to GAO guidance for effective training programs and analyzed policies, data, and previous reports on acquisition training. DOD's certification training program--provided by the Defense Acquisition University (DAU)--generally demonstrates the capability to provide effective training, though some attributes of an effective training program are lacking. DAU ensures that strategic and tactical changes are promptly incorporated into training; uses centralized and decentralized training approaches in design and development; collects data during implementation to ensure feedback on its training programs; and analyzes its training during evaluation. However, DOD lacks complete information on the skill sets of the current acquisition workforce and does not have outcome-based metrics to assess results achieved in enhancing workforce proficiency and capability through training efforts. In 2009, GAO recommended that the Secretary of Defense identify and update on an ongoing basis the number and skill sets of the total acquisition workforce--including civilian, military, and contractor personnel--that the department needs to fulfill its mission. DOD agreed and to date has completed about one-fifth of its workforce competency assessments. At the end of fiscal year 2009, 90 percent of DOD's acquisition workforce personnel had completed required certification training or were within required time frames to do so, according to DAU data. However, DAU reports that it cannot provide for all training requested for the entire acquisition workforce. DAU has offered more courses in recent years, and high-priority personnel--those needing to complete classes for certification in their current position--constitute the majority in DAU classes. DAU plans the number and location of its classes based on data that DOD officials noted are generally incomplete when submitted, and DAU must adapt during the year to support new requirements as they are identified. DAU has identified the need for a new, integrated student information system that will provide better insight into the workforce it supports and is in the early stages of its procurement. DOD reports that most of the training-related recommendations from previous reviews--the Gansler Commission, the Panel on Contracting Integrity, and a prior GAO report--have been fully implemented and some actions are still under way. DOD has either fully or partially implemented 15 of the 19 recommendations GAO reviewed. Both the Army and the Office of the Secretary of Defense have taken steps to respond to the Gansler Commission recommendations. 
Most of the recommendations made by the Panel on Contracting Integrity have been implemented, with the exception of two recommendations related to assessing guidance and reviewing a specific training topic. GAO made four recommendations pertaining to the Defense Contract Audit Agency's government auditing standards training and expertise, of which one has been partially implemented and three have not been implemented, but some actions have been taken. GAO recommends DOD establish milestones for developing metrics to measure how certification training improves acquisition workforce capability and a time frame for acquiring and implementing an integrated information system. DOD concurred with the second but not the first recommendation. GAO continues to believe DOD needs to develop additional metrics.
Medicare beneficiaries receive a wide range of services in hospital outpatient departments, such as emergency room and clinic visits, diagnostic services such as x-rays, and surgical procedures. To receive Medicare payment, hospitals report the services they provided to a beneficiary on a claim form they submit to CMS along with their charge for each service. For Medicare payment purposes, an outpatient service consists of a primary service and packaged services, the additional services or items associated with that primary service. CMS assigns each primary service to an ambulatory payment classification (APC) group, which may include other similar primary services, and pays the hospital at the designated APC payment rate, adjusted for variation in local wages. A hospital can receive multiple APC payments for a single outpatient visit if more than one primary service is delivered during that visit. On outpatient claims, hospitals identify the primary services they provided using a Healthcare Common Procedure Coding System (HCPCS) code, while they identify packaged services by either specific HCPCS codes or revenue codes that represent general hospital departments or centers, such as "pharmacy," "observation room," or "medical social services." In addition to claims, hospitals submit annual cost reports to CMS that state their total charges and costs for the year and the individual hospital department charges and costs.

As a first step in calculating the OPPS payment rate for each APC, CMS obtains hospital charge data on each outpatient service from the latest available year of outpatient claims. It calculates each hospital's cost for each service by multiplying the charge by a cost-to-charge ratio that is computed from the hospital's most recent cost report, generally on an outpatient department-specific basis. In those instances when a cost-to-charge ratio does not exist for an outpatient department in a given hospital, CMS uses one from a related outpatient department or the hospital's overall cost-to-charge ratio for outpatient department services. The cost of each primary service is then combined with the costs of the related packaged services to calculate a total cost for that primary service. On single-service claims, that is, claims with one primary service, CMS can associate packaged services with the primary service and calculate a total cost for the service (see fig. 1). However, on multiple-service claims, that is, claims with more than one primary service, the packaged services and their costs listed on the claim cannot be associated with particular primary services, as the costs of a packaged service may be associated with one or a combination of primary services (see fig. 2). For this reason, CMS excluded all multiple-service claims from rate setting prior to 2003. Beginning with the 2003 payment rates, CMS identified several methods that allowed it to convert some multiple-service claims into single-service claims and therefore include them in its rate-setting calculations.

After calculating the cost of each primary service assigned to an APC for each hospital claim, CMS arrays the costs for all claims and determines the median cost. To calculate the APC's weight relative to other APCs, CMS compares the median cost of each APC to the median cost of APC 0601, a mid-level clinic visit, which is assigned a relative weight of 1.00. For example, if the median cost of APC 0601 is $100 and the median cost of "APC A" is $50, CMS assigns APC A a relative weight of 0.50.
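To make the arithmetic described above concrete, the sketch below traces those steps in order: converting charges to estimated costs with a department-level cost-to-charge ratio, folding packaged charges into the primary service on single-service claims, taking the median cost per APC, and anchoring relative weights to APC 0601. It is an illustration only, not CMS's actual process or code; the claim records, cost-to-charge ratios, and the conversion factor (the dollar multiplier discussed in the next paragraph) are hypothetical values, and adjustments such as local wage variation are omitted.

```python
from statistics import median

# Hypothetical single-service claims: each has one primary service (APC),
# the hospital's charge for it, charges for packaged services, and the
# department-specific cost-to-charge ratio (CCR) from the cost report.
claims = [
    {"apc": "0601", "primary_charge": 180.0, "packaged_charges": [40.0], "ccr": 0.50},
    {"apc": "0601", "primary_charge": 220.0, "packaged_charges": [20.0], "ccr": 0.45},
    {"apc": "A",    "primary_charge": 100.0, "packaged_charges": [25.0], "ccr": 0.40},
    {"apc": "A",    "primary_charge": 120.0, "packaged_charges": [10.0], "ccr": 0.42},
]

def claim_cost(claim):
    """Fold packaged charges into the primary service, then convert the
    total charge to an estimated cost using the department-level CCR."""
    total_charge = claim["primary_charge"] + sum(claim["packaged_charges"])
    return total_charge * claim["ccr"]

# Median cost per APC across all claims assigned to that APC.
costs_by_apc = {}
for c in claims:
    costs_by_apc.setdefault(c["apc"], []).append(claim_cost(c))
median_cost = {apc: median(costs) for apc, costs in costs_by_apc.items()}

# Relative weight: each APC's median cost divided by the median cost of
# APC 0601 (the mid-level clinic visit), which is anchored at 1.00.
weights = {apc: cost / median_cost["0601"] for apc, cost in median_cost.items()}

# A conversion factor (hypothetical value) turns each weight into dollars.
CONVERSION_FACTOR = 52.15
payment_rates = {apc: round(w * CONVERSION_FACTOR, 2) for apc, w in weights.items()}
print(weights)
print(payment_rates)
```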
To obtain a payment rate for each APC, CMS multiplies the relative weight by a factor that converts it to a dollar amount. In addition, CMS annually reviews and revises the services assigned to a particular APC and uses the new APC assignments and the charges from the latest available outpatient hospital claims to recalibrate the relative weights, and therefore the payment rates.

New drugs and devices are eligible to receive temporary pass-through payments for 2 to 3 years, depending on when each drug's or device's eligibility began. January 1, 2003, was the first time that pass-through eligibility expired for any drugs or devices. Once pass-through eligibility for these items expires, CMS determines whether they will be considered a primary service and assigned to a separate APC or a packaged service and included with the primary services with which they are associated on a claim. Pass-through eligibility expired for 236 drugs on January 1, 2003, and for 7 drugs on January 1, 2004. For those drugs expiring in 2003, CMS designated any drug with a median cost exceeding $150 (115 drugs) as a primary service, and each was assigned to its own, separately paid APC. The remaining 121 drugs, those with a median cost of less than $150, were designated as packaged services; that is, their costs were included with the costs of the primary service they were associated with on the claim. CMS stated that many of these latter drugs were likely present on claims with a primary service of drug administration and were therefore packaged with the services assigned to the six drug administration APCs, that is, the three chemotherapy administration and three drug injection and infusion APCs. For these packaged drugs, hospitals had previously received two payments: one for the administration of the drug or other primary service and an additional pass-through payment for the drug itself. Once eligibility expired, hospitals received only one payment for both the administration or other primary service and the packaged drug. In 2004, all 7 drugs for which pass-through eligibility expired were designated as primary services and assigned to their own, separately paid APCs. Pass-through eligibility also expired for the devices in 95 device categories on January 1, 2003, and for the devices in 2 device categories on January 1, 2004; in both years, the devices in all of these categories were designated as packaged services, and their costs were included with the costs of the primary service they were associated with on the claim. Although hospitals had previously received two payments, one for the procedure associated with the device and an additional pass-through payment for the device, hospitals then received only one payment for both the procedure and its associated device.

The OPPS payment rates of former pass-through, separately paid drugs were generally lower than the pass-through payment rate, but the payment rates of former pass-through drugs and devices that were packaged cannot be evaluated, as these items are not assigned a distinct payment rate. In 2003, the payment rates for the 115 of 236 former pass-through drugs that were designated as separately paid drugs almost universally decreased from the pass-through payment rates. In 2004, all 7 former pass-through drugs were designated as separately paid drugs, and the payment rates for all 7 decreased.
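The 2003 designation rule described above (a separate APC for an expiring pass-through drug with a median cost above $150, packaging for the rest) can be written as a small function. This is a simplified sketch; the drug names and median-cost figures are invented, and the actual determinations rested on CMS's full claims analysis rather than a lookup table.

```python
# Hypothetical drugs whose pass-through eligibility expired in 2003,
# with the median cost CMS calculated for each (figures invented).
expiring_drugs = {
    "drug_x": 640.00,
    "drug_y": 32.50,
    "drug_z": 151.00,
}

SEPARATE_APC_THRESHOLD = 150.00  # 2003 cutoff described in the text

def designate(median_cost):
    """Apply the 2003 rule: a drug with a median cost above $150 becomes a
    primary service with its own, separately paid APC; otherwise its cost
    is packaged with the primary service it accompanies on the claim."""
    if median_cost > SEPARATE_APC_THRESHOLD:
        return "own, separately paid APC (primary service)"
    return "packaged with the associated primary service"

for drug, cost in expiring_drugs.items():
    print(f"{drug}: median cost ${cost:,.2f} -> {designate(cost)}")
```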
We cannot evaluate the payment rate changes for the remaining 121 pass-through drugs and the devices in the 95 pass-through device categories packaged in 2003, or for the devices in the 2 device categories packaged in 2004, because individual payment rates were not assigned for these items when they expired from pass-through eligibility.

In 2003, about half of all drugs for which pass-through eligibility expired (115 of 236) were assigned to their own APC and paid separately. For these drugs, we determined that over 90 percent had payment rates lower than 95 percent of AWP, the pass-through payment rate; the median payment rate was 55 percent of AWP. Individual payment rates were often considerably lower than AWP, but decreases varied substantially. For example, 1 drug had a payment rate of about 7 percent of AWP, while another had a payment rate of about 94 percent of AWP. However, 10 drugs had a payment rate of more than 100 percent of AWP. In addition, payment as a percentage of AWP varied by drug source. The majority of the 113 separately paid drugs that we analyzed were sole-source (70 percent), followed by multi-source (19 percent) and generic (10 percent). Generic drugs, which were paid the highest percentage of AWP of the three categories, had a median payment rate of 74 percent of AWP; multi-source drugs had a median of 56 percent of AWP; and sole-source drugs had a median of 53 percent of AWP. In 2004, all seven drugs for which pass-through eligibility expired were assigned to separate APCs. The individual payment rate of each drug was lower than the pass-through rate of 95 percent of AWP, with a median payment rate of 69 percent of AWP. All of these drugs were sole-source. Although the decreases in payments for these drugs were often substantial and varied greatly across individual drugs, some level of decrease is expected when pass-through eligibility expires and payments become based on hospital costs instead of AWP, which often exceeds providers' acquisition costs. In 2001, we reported that certain drugs purchased by individual physicians were widely available at costs from 66 to 87 percent of AWP.

In 2003, the costs of 121 former pass-through drugs and of the devices in 95 former pass-through device categories were packaged. Because CMS combines the costs of these items with the costs of the primary services with which they are associated on each claim, a specific payment rate for each of these drugs and devices does not exist. However, to indirectly assess the payment rates of packaged drugs and devices, we reviewed the payment rates of the APCs with which CMS stated they were likely packaged. CMS stated that, in 2003, former pass-through drug costs were most likely packaged with the six drug administration APCs. The payment rates for five of the six APCs decreased in 2003, when the costs of packaged former pass-through drugs were included, compared to 2002, when the costs of these drugs were not considered in the rate-setting calculations (see table 1). We are unable to determine why the payment rates of these APCs decreased, because fluctuations in the costs of any of the primary or packaged services in these APCs, in addition to the costs of the packaged drugs, could have affected the payment rates. However, we would have expected that combining the costs of up to $150 of packaged former pass-through drugs with the costs of the primary services in these APCs would have increased the 2003 payment rates for more of these APCs, given that the payment rates of more than half of these APCs are less than $150.
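The percent-of-AWP comparison reported above for the separately paid former pass-through drugs can be sketched briefly. The drug records and dollar figures below are invented for illustration; the 95-percent-of-AWP benchmark is the pass-through payment rate discussed above.

```python
from statistics import median

# Hypothetical separately paid former pass-through drugs: 2003 OPPS
# payment rate and average wholesale price (AWP); figures are invented.
drugs = [
    {"name": "a", "source": "sole-source",  "opps_rate": 53.00,  "awp": 100.00},
    {"name": "b", "source": "sole-source",  "opps_rate": 470.00, "awp": 900.00},
    {"name": "c", "source": "multi-source", "opps_rate": 28.00,  "awp": 50.00},
    {"name": "d", "source": "generic",      "opps_rate": 7.40,   "awp": 10.00},
]

PASS_THROUGH_PCT = 95.0  # pass-through payments were 95 percent of AWP

# Express each drug's OPPS rate as a percentage of its AWP for comparison
# with the former pass-through rate.
for d in drugs:
    d["pct_of_awp"] = 100.0 * d["opps_rate"] / d["awp"]

overall_median = median(d["pct_of_awp"] for d in drugs)
below_pass_through = sum(d["pct_of_awp"] < PASS_THROUGH_PCT for d in drugs)

# Median percentage of AWP by drug source (sole-source, multi-source, generic).
by_source = {}
for d in drugs:
    by_source.setdefault(d["source"], []).append(d["pct_of_awp"])
medians_by_source = {s: round(median(v), 1) for s, v in by_source.items()}

print(f"median payment: {overall_median:.0f}% of AWP; "
      f"{below_pass_through} of {len(drugs)} drugs below the pass-through rate")
print(medians_by_source)
```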
To indirectly assess the payment rates of the devices in the 95 device categories expiring from pass-through eligibility in 2003, we reviewed APCs for which CMS determined that device costs made up at least 1 percent of the APC's total cost. We found that the payment rates of these APCs varied substantially between 2002 and 2003, when the former pass-through device costs likely were included. For example, the payment rate of APC 0688 (Revision/Removal of Neurostimulator Pulse Generator Receiver) decreased by 48 percent, while the payment rate of APC 0226 (Implantation of Drug Infusion Reservoir) increased by 94 percent. However, we cannot attribute these fluctuations solely to the packaging of pass-through devices, because changes between 2002 and 2003 in the costs of the primary services and other packaged services assigned to the APCs also could have affected the payment rates. In 2004, the devices in two device categories expired from pass-through eligibility. The devices in one category were associated with services in one APC, APC 0674 (Prostate Cryoablation). The payment rate for this APC almost doubled. We were unable to examine the change in payment for the APC or APCs associated with the devices in the other expired pass-through device category because CMS did not identify the APC or APCs into which the costs of the devices in this device category were packaged.

No type of hospital provided a disproportionate number of Medicare outpatient services associated with certain drugs and devices; as a percentage of total Medicare outpatient services, these services varied little among hospitals that differed in characteristics such as the presence of an outpatient cancer center, teaching status, urban or rural location, or outpatient service volume. In 2001, outpatient drugs were most often associated with APCs for chemotherapy administration services, and devices in pass-through device categories were most often associated with APCs for cardiac services. We found that chemotherapy administration and cardiac services composed only a small proportion of total Medicare outpatient services for all hospitals (see table 2). In addition, these proportions varied little among different types of hospitals.

The OPPS rate-setting methodology used by CMS may result in APC payment rates for drugs, devices, and other outpatient services that do not uniformly reflect hospitals' costs. Two areas of CMS's methodology are particularly problematic. First, the claims that CMS uses to calculate hospitals' costs and set payment rates may not be a representative sample of hospital claims, as CMS excluded many multiple-service claims when calculating the cost of OPPS services, including those with drugs and devices. The data CMS has available do not allow it to determine whether excluding many multiple-service claims has an effect on OPPS payment rates. However, if the types or costs of services on excluded claims differ from the types or costs of services on included claims, the payment rates of some or all APCs may not uniformly reflect hospitals' costs of providing those services. Second, when calculating hospitals' costs, CMS assumes that, in setting charges within a specific department, a hospital marks up the cost of each service by the same percentage. However, not all hospitals use this methodology, and charge-setting methodologies for drugs, devices, and other outpatient services vary greatly across hospitals and across departments within a hospital.
CMS’s methodology does not recognize hospitals’ variability in setting charges, and, therefore, the costs of services used to set payment rates may be under or overestimated. The claims CMS uses to calculate hospitals’ costs and set payment rates may not be a representative sample of hospital claims. When calculating the cost of all OPPS services, including drugs and devices, to set payment rates, CMS excluded over 40 percent of all multiple-service claims because CMS could not associate particular packaged services with a specific primary service on these claims. Drug and device industry representatives we spoke with raised concerns that certain drugs and devices are often billed on multiple-service claims that are largely excluded from rate setting. For example, they stated that chemotherapy administration and the drugs themselves are typically billed on a 30-day cycle; therefore, one claim likely includes chemotherapy administration and other primary and packaged services and is likely excluded from CMS’s rate-setting calculations. Device industry representatives we spoke with also asserted that multiple-service claims represent more complex, and therefore, potentially costlier, outpatient visits and excluding them from the rate-setting calculations underestimates the actual cost of a service. Because of the structure of the outpatient claim, the data CMS has available do not allow for the comparison of single-service claims and multiple-service claims to determine whether excluding many multiple- service claims has an effect on OPPS payment rates. It is possible that excluding many multiple-service claims has little or no effect on OPPS payment rates. However, if the types or costs of services on excluded claims differ from the types or costs of services on included claims, the payment rates of some or all APCs may not uniformly reflect hospitals’ costs of performing these services. The costs of drugs, devices, and other outpatient services that CMS calculates from hospital charges and uses to set payment rates may not uniformly approximate hospitals’ costs. CMS multiplies charges by hospital-specific cost-to-charge ratios to calculate hospitals’ costs, which decreases the charges by a constant percentage. This methodology is based on the assumption that each hospital marks up its costs by a uniform percentage within each department to set each service’s charge. However, we found that not all hospitals use this methodology to establish their charges, and that drug, device, and general charge-setting methodologies vary greatly among hospitals and even among departments within the same hospitals. We received information from 113 hospitals, although not all hospitals responded to each question. Of the 92 hospitals responding, 40 reported that they mark up all drug costs by a uniform percentage to establish charges, but 33 reported that they mark up low-cost drugs by a higher percentage and high-cost drugs by a lower percentage. Of 85 hospitals responding, 39 reported that they mark up all device costs using a uniform percentage, but 39 reported that they mark up low-cost devices using a higher percentage and high-cost devices using a lower percentage. In addition, 19 hospitals reported using other methods to set drug charges and 7 reported doing so for devices, such as a lower percentage markup for low-cost drugs and devices than for high-cost drugs and devices. (See appendix II for a more detailed description of hospital charge-setting methodologies.) 
Because CMS uses the same rate-setting methodology to determine drug and device payment rates as it uses for all other OPPS services, we also asked hospitals about more general charge-setting practices and found that they varied as well. To set base charges for clinic visits, hospitals reported using a wide variety of prices and methods, including cost, market comparisons, and the rates Medicare pays for outpatient services as well as payment rates for other benefit categories. To mark up clinic visits, 29 of the 45 hospitals responding used a uniform percentage increase; the remaining 16 hospitals reported using a variety of other methods, including using a higher percentage markup for low-cost visits than for high-cost visits. In addition to variation in charge-setting methodologies among hospitals, variation also can exist within an individual hospital. Hospital consultants told us that a single item can be assigned different charges if it is provided through more than one department within the same hospital. All 58 hospitals responding reported that they update their charges for inflation; 40 reported they did so annually, 12 did so at other times, and 6 did so both annually and at other times. Of the 58 hospitals that reported updating their charges for inflation, 25 reported that they apply a uniform, across-the-board percentage increase to all their charges, and 4 hospitals reported using both a uniform percentage and another type of increase. The remaining 29 hospitals reported using another method, such as applying an increase only to selected departments within the hospital. In addition, 33 of the 57 hospitals reported that they excluded some charges from these updates. The type of charges they excluded varied widely, but included drug and laboratory charges. The variation in methods hospitals use to update their charges reduces the likelihood that charges will uniformly reflect costs.

CMS's rate-setting methodology may result in OPPS payment rates that do not uniformly reflect hospitals' costs of providing services. We identified two areas of this methodology that are of particular concern because not enough data are currently available to assess their impact. First, CMS excludes many multiple-service claims from its rate-setting calculations. To the extent that the types and costs of services on these claims are different from services on the claims included in the analysis, OPPS payment rates may not reflect hospitals' costs. The current structure of the outpatient claims does not allow for an analysis to determine the effect of these exclusions. Second, in its rate-setting calculations, CMS assumes that each hospital uses a uniform markup percentage to set its charges within each department, although we found that hospitals use a variety of markup methodologies. Therefore, CMS's application of a constant cost-to-charge ratio may not result in an accurate calculation.

We recommend that the Administrator of CMS take the following three actions. First, the Administrator should gather the necessary data and perform an analysis that compares the types and costs of services on single-service claims to those on multiple-service claims. Second, the Administrator should analyze the effect that the variation in hospital charge-setting practices has on the OPPS rate-setting methodology.
Third, the Administrator should, in the context of the first two recommendations, analyze whether the OPPS rate-setting methodology results in payment rates that uniformly reflect hospitals' costs of the outpatient services they provide to Medicare beneficiaries, and, if it does not, make appropriate changes in that methodology.

We received written comments on a draft of this report from CMS (see app. III). We also received oral comments from external reviewers representing seven industry organizations. They included the Advanced Medical Technology Association (AdvaMed), which represents manufacturers of medical devices, diagnostic products, and medical information systems; the American Hospital Association (AHA); the Association of American Medical Colleges (AAMC), which represents medical schools and teaching hospitals; the Association of Community Cancer Centers (ACCC); the Biotechnology Industry Organization (BIO), which represents biotechnology companies and academic institutions conducting biotechnology research; the Federation of American Hospitals (FAH), which represents for-profit hospitals; and the Pharmaceutical Research and Manufacturers of America (PhRMA).

In commenting on a draft of this report, CMS stated that it has continued to review and refine its OPPS data collection and analysis. In responding to our recommendation that CMS gather the necessary data and perform an analysis comparing the types and costs of services on single-service claims to those on multiple-service claims, CMS stated that it is searching for ways to use more data from multiple-service claims, and it has made efforts in recent rate-setting analyses to include data from more of these claims. We noted these efforts in the draft report. CMS noted that there are continuing challenges and costs, to both the federal government and hospitals, in expanding its efforts in this area. In its comments, CMS suggested that an analysis could be done using an algorithm to allocate charges among multiple-service claims, but noted that such an approach could create further distortions in the relative weights. Our recommendation to CMS, however, is that the agency should gather additional data on the relative costs of services on single- and multiple-service claims, rather than continuing to analyze existing data.

In response to our recommendation that CMS analyze the effect of hospital charge-setting practices on the OPPS rate-setting methodology, CMS stated that we should recognize that its rate-setting methodology, which converts hospital charges to costs using a cost-to-charge ratio, does so at the level of an individual hospital department. The draft report noted that CMS generally calculates cost-to-charge ratios on a department-specific basis; however, we have revised the report to highlight that information throughout. CMS also said that the application of cost-to-charge ratios to charges of a hospital has long been the recognized method of establishing reasonable costs for hospital services and was an important component of the cost-based reimbursement system that was used by Medicare to pay for hospital outpatient services before OPPS was implemented. While we agree that it was an important component of the prior payment system, we believe the implementation of the current payment system has changed the relevance of applying cost-to-charge ratios to determine hospitals' costs.
OPPS, rather than reimbursing individual hospitals on the basis of their costs of providing outpatient services, uses costs from individual hospitals to construct a prospective payment system that sets rates for individual services that apply to all hospitals. Finally, CMS stated that the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 specified that cost-to-charge ratios would be used to set payment amounts for brachytherapy sources; however, a discussion of brachytherapy payment is outside the scope of this report.

In response to our recommendation that CMS analyze whether the OPPS rate-setting methodology results in payment rates that uniformly reflect hospitals' costs of the services they provide to Medicare beneficiaries and make any appropriate changes in the methodology, CMS stated that it will consider our recommendations as it continues to assess and refine the rate-setting methodology. CMS said that it believes it has made great strides on this issue and is continuing to pursue the analyses necessary to create means by which all claims can be used to set the OPPS relative payment weights and rates. CMS also made technical comments, which we incorporated where appropriate.

Industry representatives generally agreed with the findings, conclusions, and recommendations in the draft report. Comments on specific portions of the draft report centered on three areas: payment rates of former pass-through drugs and devices, provision of services associated with drugs and devices, and CMS's rate-setting methodology. Several industry representatives commented on our analysis of Medicare payment for former pass-through drugs and devices. AHA stated that although payment rates for drugs may have decreased when they expired from pass-through status, those rates are now more consistent, relative to costs, with the payment rates for other OPPS services. PhRMA agreed with our finding that the payment rates for former pass-through drugs and devices that are packaged cannot be evaluated and suggested that we recommend that CMS specifically address this problem.

Industry representatives commented on our analysis of the provision of services associated with drugs and devices among different types of hospitals. ACCC agreed with the percentages of Medicare outpatient services related to chemotherapy administration and cardiac services in the draft report; however, it stated that it believed these percentages demonstrated that large hospitals provided a disproportionate share of chemotherapy administration. ACCC and AAMC stated that these percentages also demonstrated that major teaching hospitals provided a disproportionate share of chemotherapy administration services. In addition, both groups suggested that we perform other analyses by type of hospital, such as the proportion of total payments, the proportion of total services excluding clinic services, or the absolute number of services for which chemotherapy administration and cardiac services accounted.

Many of the reviewers addressed our finding that CMS's rate-setting methodology may result in OPPS payment rates that do not uniformly reflect hospitals' costs. Representatives from AAMC, ACCC, AdvaMed, BIO, and PhRMA agreed with our conclusion that CMS may not be using a representative sample of claims to set payment rates and that CMS's rate-setting methodology does not account for variation in hospital charge-setting practices.
Several of these representatives suggested we analyze and discuss other factors that could further skew CMS’s calculation of hospital costs, such as its use of incorrect or incomplete claims in rate setting. Regarding the suggestion that we specifically recommend that CMS address the issue that the payment rates for former pass-through drugs that are packaged and former pass-through devices cannot be evaluated, we believe that our more general recommendation allows the agency the flexibility to determine the most appropriate analyses for examining the rate-setting methodology. With respect to the comment that the percentages of Medicare outpatient services accounted for by chemotherapy administration demonstrate that certain types of hospitals provide a disproportionate share of these services, we disagree. As noted in the draft report, we found that these percentages differ by type of hospital, but the differences are not substantial, as all types of hospitals provided a relatively small proportion of these services. No type of hospital provided a disproportionately large number of these services. We analyzed the proportion of services, rather than payments as industry representatives suggested, because we believe that is the better analysis for determining whether a certain type of hospital provides a disproportionate share of these services. We did not analyze the proportion of total services excluding clinic services or the absolute number of these services, because we do not believe such analyses would accurately and comparably reflect potential differences among hospitals across all the outpatient services they perform. The industry representatives also made technical comments, which we incorporated where appropriate. We are sending a copy of this report to the Administrator of CMS. The report is available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others on request. If you or your staff have any questions, please call me at (202) 512-7119. Another contact and key contributors to this report appear in appendix IV. We analyzed Medicare claims data used by the Centers for Medicare & Medicaid Services (CMS) to set the 2003 outpatient prospective payment system (OPPS) payment rates. In addition, we analyzed drug average wholesale prices (AWPs), drug sources (sole-source, multi-source, or generic), and OPPS payment rates obtained from CMS. We interviewed officials at CMS and representatives from the American Hospital Association, Association of American Medical Colleges, Association of Community Cancer Centers (ACCC), Federation of American Hospitals, Greater New York Hospital Association, as well as from one large hospital system, one large hospital alliance, and five individual hospitals. In addition, we spoke with representatives from the Advanced Medical Technology Association, Biotechnology Industry Organization, California Healthcare Institute, Pharmaceutical Research and Manufacturers of America, as well as from seven drug manufacturers and three device manufacturers. We also spoke with consultants that advise hospitals on setting their charges. To compare payment for drugs to previous pass-through payments, we relied on information provided by CMS on drug sources and 2003 and 2004 drug payment rates, and on CMS’s calculations of the AWPs for these drugs, which we supplemented with our own calculations.
From CMS, we obtained the drug source and the payment rate for the 115 drugs and the 7 drugs whose pass-through eligibility expired as of January 1, 2003 and January 1, 2004, respectively, that were assigned to separate ambulatory payment classification (APC) groups. We used Medicare’s January 2003 and January 2004 Single Drug Pricer files to determine the 2003 and 2004 AWPs, respectively, for most of the drugs. For the 37 drugs that were not included in the 2003 Single Drug Pricer file, we used the 2002 Drug Topics Red Book, published by Thomson Medical Economics, to calculate their AWPs. For the 2 drugs that were not in the 2004 Single Drug Pricer file, we used the 2003 Drug Topics Red Book, published by Thomson PDR, to calculate their AWPs. We calculated payment rates as a percentage of AWP for all drugs in 2003 and 2004. From our 2003 analysis, we excluded 1 multi-source drug for which we calculated an AWP from the 2002 Drug Topics Red Book that was inconsistent with the 2002 AWP CMS provided to us and another multi-source drug with an AWP of $0.34, but a payment rate of almost 29,000 percent of that amount. To determine whether a particular type or types of hospitals provide a disproportionate number of outpatient services associated with drugs and devices, we used the outpatient claims file that CMS used to calculate the 2003 OPPS payment rates. To perform our own data reliability check of this file, we examined selected services to determine the reasonableness of their frequency in the data set, given the population of the beneficiaries receiving services and the setting in which they are delivered. We determined the data were reasonable for our purposes. Using the claims, we determined which outpatient services were most often associated with drugs and devices and found that drugs were most often associated with chemotherapy administration services and devices were most often associated with cardiac services. Then, also using the claims, we compared proportions of chemotherapy administration and cardiac services for all hospitals, as well as for cancer center and noncancer center hospitals, major teaching and other hospitals, urban and rural hospitals, and hospitals with different outpatient service volumes. We included only those hospitals identified in CMS’s 2003 OPPS impact file, a data file CMS constructs to analyze projected effects of policy changes on various hospital groups, such as urban and rural hospitals. We excluded hospitals with fewer than 1,100 total outpatient services, or approximately 3 outpatient services per day, as we believe such hospitals are not representative of most hospitals with outpatient departments. We defined cancer center hospitals as those hospitals that were members of ACCC as of February 28, 2003, the latest data available when we performed this analysis. We obtained the membership list from the ACCC. Using the September 2002 Medicare Provider of Services file and information obtained directly from the ACCC, we determined the Medicare provider numbers of ACCC members to identify claims billed by these hospitals. We defined major teaching hospitals as those hospitals having an intern/resident-to-bed ratio of 0.25 or more. We defined the urban or rural location of a hospital based on the urban/rural location indicator in the Medicare hospital OPPS impact file from calendar year 2003. We defined volume based on the number of services a hospital provided, also as indicated in the impact file. 
Small volume hospitals were those with fewer than 11,000 services, medium volume hospitals were those with at least 11,000 services but fewer than 43,000 services, and large volume hospitals were those with at least 43,000 services. We interviewed representatives from hospitals, hospital associations, and drug and device manufacturers and the associations that represent them to obtain information about hospital charging practices. We received information on charge-setting practices from 5 hospitals whose officials we interviewed. We indirectly received information from 50 other hospitals through association and industry representatives with whom we spoke. Finally, we contacted seven state hospital associations in geographically diverse areas not well represented in our previous sample to identify their members’ charging practices. Some hospitals responded directly to us and others responded to their state association, which forwarded the responses to us. We received responses from 58 hospitals. The 113 hospitals from which we received information are not a statistically representative sample of all hospitals. We conducted our work from March 2003 through August 2004 in accordance with generally accepted government auditing standards. We received information from 113 hospitals, although not all hospitals responded to each question. Hospitals reported using a variety of methods to set the base charges for their clinic visit services (see table 3). To set the base charges for drugs, 25 of 57 hospitals responding reported that they used acquisition cost, 30 used the drug’s average wholesale price (AWP), and 2 used a combination of acquisition cost and AWP. To set the base charges for devices, 55 of 57 hospitals responding reported that they used acquisition cost. After setting base charges, 29 of 45 hospitals responding reported that they marked up all of their clinic visit services by the same percentage increase, although they reported using a variety of other methods as well. To mark up base charges for drugs and devices, most hospitals responding used either the same percentage for all drugs and for all devices, or used a graduated percentage markup, marking up low-cost items by a higher percentage (see table 4). In addition, 24 of the 57 hospitals responding reported that they include nonproduct costs as a portion of their drug charges, and 25 of 57 responding reported that they include nonproduct costs as a portion of their device charges. The most common nonproduct costs included were administrative and overhead costs. Of the 24 including nonproduct costs in drug charges, 12 reported that they do so by adding an additional percentage of the drug acquisition cost to the drug charge. Of the 25 including nonproduct costs in device charges, 16 reported that they do so by adding an additional percentage of the device acquisition cost to the device charge. However, the amount of the nonproduct costs as a percentage of the charges varied widely among hospitals. Of the 24 hospitals including nonproduct costs in drug charges, 16 reported that the amount varied by the route of administration for the drug, such as intravenous or intramuscular administration. Of the 58 hospitals responding, all reported that they update their charges for inflation; 40 reported they did so annually, 12 did so at other times, and 6 did so both annually and at other times. While many used a standard across-the-board percentage increase to update their charges, the majority used other methods. 
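The survey responses above describe several distinct ways of building a charge. The short sketch below illustrates one combination hospitals reported, a graduated markup on acquisition cost plus a nonproduct (overhead) percentage; the tier boundaries and percentages are hypothetical values chosen for illustration, not figures reported by the surveyed hospitals.

```python
# Hypothetical charge construction; the markup tiers, percentages, and
# overhead rate are invented for illustration only.

def drug_charge(acquisition_cost: float, overhead_pct: float = 0.15) -> float:
    """Build a billed charge from acquisition cost."""
    # Graduated markup: lower-cost items receive a higher percentage markup.
    if acquisition_cost < 25:
        markup = 3.0     # 200 percent markup
    elif acquisition_cost < 250:
        markup = 2.0     # 100 percent markup
    else:
        markup = 1.3     # 30 percent markup
    base_charge = acquisition_cost * markup
    # Nonproduct (administrative and overhead) costs added as a percentage
    # of acquisition cost, one of the methods hospitals reported.
    return base_charge + acquisition_cost * overhead_pct

for cost in (10.0, 100.0, 1000.0):
    print(f"acquisition ${cost:>7.2f} -> charge ${drug_charge(cost):>8.2f}")
```

Because each hospital chooses its own tiers, percentages, and overhead treatment, two hospitals with identical acquisition costs can report very different charges for the same item.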
In addition, 33 of the 57 hospitals responding reported that they exclude certain charges from these updates. The types of services whose charges they excluded, such as drug, laboratory, and room charges, varied widely. Finally, 49 of 58 hospitals responding reported that they periodically review all their charges. Beth Cameron Feldpush, Joanna L. Hiatt, Maria Martino, and Paul M. Thomas made major contributions to this report.
Under the Medicare hospital outpatient prospective payment system (OPPS), hospitals receive a temporary additional payment for certain new drugs and devices while data on their costs are collected. In 2003, these payments expired for the first time for many drugs and devices. To incorporate these items into OPPS, the Centers for Medicare & Medicaid Services (CMS) used its rate-setting methodology that calculates costs from charges reported on claims by hospitals. At that time, some drug and device industry representatives noted that payment rates for many of these items decreased and were concerned that hospitals may limit beneficiary access to these items if they could not recover their costs. GAO was asked to examine whether the OPPS rate-setting methodology results in payment rates that uniformly reflect hospitals' costs for providing drugs and devices, and other outpatient services, and if it does not, to identify specific factors of the methodology that are problematic. The rate-setting methodology used by CMS may result in OPPS payment rates for drugs, devices, and other services that do not uniformly reflect hospitals' costs of providing those services. Two areas of the methodology are particularly problematic. The hospital claims for outpatient services that CMS uses to calculate hospitals' costs and set payment rates may not be a representative sample of all hospital outpatient claims. For Medicare payment purposes, an outpatient service consists of a primary service and the additional services or items associated with the primary service, referred to as packaged services. CMS has excluded over 40 percent of multiple-service claims, claims that include more than one primary service along with packaged services, when calculating the cost of all OPPS services, including those with drugs and devices. It excludes these multiple-service claims because, when more than one primary service is reported on a claim, CMS cannot associate each packaged service with a specific primary service. Therefore, the agency cannot calculate a total cost for each primary service on that claim, which it would use to set payment rates. The data CMS has available do not allow for a determination of whether excluding many multiple-service claims has an effect on OPPS payment rates. However, if the types or costs of services on excluded claims differ from those on included claims, the payment rates of some or all services may not uniformly reflect hospitals' actual costs of providing those services. In addition, in calculating hospitals' costs, CMS assumes that, in setting charges within a specific department, a hospital marks up the cost of each service by the same percentage. However, based on information from 113 hospitals, GAO found that not all hospitals use this methodology: charge-setting methodologies for drugs, devices, and other outpatient services vary greatly across hospitals and across departments within a hospital. CMS's methodology does not recognize hospitals' variability in setting charges, and therefore, the costs of services used to set payment rates may be under- or overestimated.
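The exclusion of multiple-service claims described above follows from a simple data limitation, which the sketch below illustrates with invented claim records: when a claim lists only one primary service, every packaged line item can be attributed to it, but when a claim lists more than one primary service, nothing on the claim says which packaged item belongs to which service.

```python
# Invented claim records for illustration only.
single_service_claim = {
    "primary": ["cardiac_procedure"],
    "packaged": [("guidewire", 300.0), ("observation", 150.0)],
}
multiple_service_claim = {
    "primary": ["cardiac_procedure", "chemotherapy_administration"],
    "packaged": [("guidewire", 300.0), ("drug_preparation", 90.0)],
}

def packaged_cost_per_primary(claim):
    """Attribute packaged costs to the primary service only when unambiguous."""
    if len(claim["primary"]) == 1:
        return {claim["primary"][0]: sum(cost for _, cost in claim["packaged"])}
    # With more than one primary service, the claim does not say which
    # packaged item belongs to which primary service, so no per-service
    # total can be computed without an allocation assumption.
    return None

print(packaged_cost_per_primary(single_service_claim))    # attributable
print(packaged_cost_per_primary(multiple_service_claim))  # None: excluded from rate setting
```

Any rule for splitting the packaged costs on the second claim would be an allocation assumption, which is the kind of algorithmic approach CMS cautioned could introduce its own distortions in the relative weights.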
The Park Service is the caretaker of many of the nation’s most precious natural and cultural resources. Today, more than 130 years after the first national park was created, the National Park System has grown to include 390 units covering over 84 million acres. These units include a diverse mix of sites—now in more than 20 different categories. The Park Service’s mission is to preserve unimpaired the natural and cultural resources of the National Park System for the enjoyment of this and future generations. Its objectives include providing for the use of the park units by supplying appropriate visitor services and infrastructure (e.g., roads and facilities) to support these services. In addition, the Park Service protects its natural and cultural resources (e.g., preserving wildlife habitat and Native American sites) so that they will be unimpaired for the enjoyment of future generations. The Park Service receives its main source of funds to operate park units through appropriations in the ONPS account. The Park Service chooses to allocate funds to its park units in two categories—one for daily operations, and another for specific, non-recurring projects. Daily operations allocations for individual park units are built on each park unit’s allocation for the prior year. Park units receive an increased allocation for required pay increases and may request specific increases for new or higher levels of ongoing operating responsibilities, such as adding law enforcement rangers for increased homeland security protection. As is true for other government operations, the cost of operating park units will increase each year due to required pay increases, the rising costs of benefits for federal employees, and rising overhead expenses such as utilities. The Park Service may provide additional allocations for daily operations to cover all or part of these cost increases. If the continuation of operations at the previous year’s level would require more funds than are available, park units must adjust by identifying efficiencies within the park unit, using other authorized funding sources such as fees or donations to fund the activity, or reducing services. Upon receiving their allocations for daily operations each year, park unit managers exercise a great deal of discretion in setting operational priorities. Generally, 80 percent or more of each park unit’s allocation for daily operations is used to pay the salaries and benefits of permanent employees (personnel costs). Park units use the remainder of their allocations for daily operations for overhead expenses such as utilities, supplies, and training, among other things. In addition to daily operations funding, the Park Service also allocates project-related funding to park units for specific purposes to support its mission. For example, activities completed with Cyclic Maintenance and Repair and Rehabilitation funds include re-roofing or re-painting buildings, overhauling engines, refinishing hardwood floors, replacing sewer lines, repairing building foundations, and rehabilitating campgrounds and trails. Park units compete for project allocations by submitting requests to their respective regional office and headquarters. Regional and headquarters officials determine which projects to fund. While an individual park unit may receive funding for several projects in one year, it may receive none the next.
Park units are authorized to collect revenue from outside sources such as visitor fees and donations—although how they are used may be limited to specific purposes. Since 1996, the Congress has provided the park units with authority to collect fees from visitors and retain these funds for use on projects to enhance recreation and visitor enjoyment, among other things. Since 2002, the Park Service has required park units to spend the majority of their visitor fees on deferred maintenance projects, such as road or building repair. The Park Service also receives revenue from concessionaires under contract to perform services at park units—such as operating a lodge—and cash or non-monetary donations from non-profit organizations or individuals. These funds may vary from year to year and, in the case of donations, may be accompanied by stipulations on how the funds may be used. Overall appropriations for the ONPS account—including the amounts the Park Service allocated for daily operations and projects—rose in both nominal and inflation-adjusted dollars overall from fiscal year 2001 through 2005. Appropriations increased in nominal terms from about $1.4 billion in fiscal year 2001 to almost $1.7 billion in fiscal year 2005, an average annual increase of about 4.9 percent (i.e., about $68 million per year). After adjusting these amounts for inflation, the average annual increase was about 1.3 percent or almost $18 million per year. By contrast, the Park Service’s overall budget authority increased to about $2.7 billion in 2005 from about $2.6 billion in 2001, an average increase of about 1 percent per year. In inflation adjusted dollars, the total budget authority fell by an average of about 2.5 percent per year. Figure 1 shows the appropriations for the ONPS account from fiscal years 2001 through 2005. The Park Service’s total allocation for daily operations for park units increased overall in nominal dollars but declined slightly when adjusted for inflation from fiscal year 2001 through 2005. As illustrated in figure 2, overall allocations for daily operations for park units rose from about $903 million in fiscal year 2001 to almost $1.03 billion in fiscal year 2005—an average annual increase of about $30 million, or about 3 percent. After adjusting for inflation, the allocation for daily operations fell slightly from about $903 million in 2001 to about $893 million in 2005—an average annual decline of about $2.5 million, or 0.3 percent. The fiscal year 2005 appropriation for the ONPS account included an additional $37.5 million over the amounts proposed by the House and Senate for the ONPS account, to be used for daily operations. The conference report accompanying the appropriation stated that the additional amount was to be used for (1) a service-wide increase of $25 million and (2) $12.5 million for visitor services programs at specific park units. Allocations for projects and other support programs increased overall in both nominal and inflation-adjusted dollars. These allocations rose from about $478 million in 2001 to about $641 million in 2005—an average annual increase of about 7.7 percent, or about $36.5 million. When adjusted for inflation, the increase was 3.9 percent, or about $18.7 million per year. Figure 3 shows allocation trends of projects and other support programs for the Park Service from fiscal years 2001 through 2005. 
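The nominal and inflation-adjusted comparisons above rest on straightforward growth arithmetic. The sketch below illustrates the calculation using the rounded ONPS appropriation figures cited above and an assumed 3.5 percent annual deflator; because the inputs are rounded and the deflator is an assumption rather than the index used in the report, the output only approximates the percentages reported.

```python
# Growth-rate arithmetic of the kind used above. The dollar figures are the
# rounded ONPS amounts cited in the text; the 3.5 percent deflator is an
# assumption for illustration, so the results only approximate the report's
# figures, which are computed from unrounded data and the actual deflator.

def avg_annual_growth(start: float, end: float, years: int) -> float:
    """Compound average annual growth rate."""
    return (end / start) ** (1 / years) - 1

nominal_2001 = 1_400.0   # ONPS appropriation, millions of dollars, FY2001 (rounded)
nominal_2005 = 1_700.0   # ONPS appropriation, millions of dollars, FY2005 (rounded)
years = 4                # FY2001 to FY2005

assumed_inflation = 0.035                                     # illustrative deflator
real_2005 = nominal_2005 / (1 + assumed_inflation) ** years   # FY2005 amount in FY2001 dollars

print(f"nominal average annual change: {avg_annual_growth(nominal_2001, nominal_2005, years):.1%}")
print(f"inflation-adjusted average annual change: {avg_annual_growth(nominal_2001, real_2005, years):.1%}")
```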
Three programs that include project funding for individual park units—Cyclic Maintenance, Repair and Rehabilitation, and Inventory and Monitoring—account for over half of the increase in project and support program allocations. As a percentage of total project and support program funding, funding for these programs rose to 31 percent in 2005 from 23 percent in 2001. For example, Cyclic Maintenance program funding increased from $34.5 million in 2001 to $62.8 million in 2005—an average annual increase of 16.2 percent in nominal terms or 12.1 percent when adjusted for inflation. Increases in the Cyclic Maintenance and Repair and Rehabilitation programs reflect an emphasis on the Park Service’s effort to reduce its estimated $5 billion maintenance backlog. Increases in the Inventory and Monitoring Program reflect an emphasis on protecting natural resources primarily through an initiative called the Natural Resource Challenge. Visitor fees are also used to support park units. Overall, the Park Service collected about $717 million in visitor fees in addition to its annual appropriation for operations from 2001 through 2005, increasing from about $140 million in 2001 to about $147 million in 2005 (an average annual increase of about 1 percent); however, in inflation-adjusted dollars, the Park Service collected about $670 million in visitor fees, falling from about $140 million in 2001 to about $127 million in 2005 (an average annual decline of over 2 percent). Overall, the Park Service collected an average of about $143 million per year in nominal terms or about $134 million per year when adjusted for inflation. Visitor fee revenue depends on several factors, including the number of visitors to each park unit, the number of national passes purchased, and the amount each park charges for entry and services. All 12 park units we visited received allocations for projects from fiscal years 2001 through 2005 that varied among years and among park units. Allocations for daily operations for the 12 park units we visited also varied. On an average annual basis, each unit experienced an increase in daily operations allocations, but most experienced a decline in inflation-adjusted terms. Officials at each park believed that their daily operations allocations were not sufficient to address increases in operating costs and new Park Service management requirements. To manage within available funding resources, park unit managers also reported that, to varying degrees, they made trade-offs among the operational activities—which in some cases resulted in reducing services in areas such as education, visitor and resource protection, and maintenance activities. Park officials also reported that they increasingly relied on volunteers and other authorized funding sources to provide operations and services that were previously paid with allocations for daily operations from the ONPS account. Park units use project-related allocations for such things as rehabilitating structures, roads, and trails; and inventorying and monitoring natural resources. The allocations for projects at the 12 park units totaled $76.8 million from 2001 through 2005. Allocations varied from park to park and year to year because these allocations support non-recurring projects for which park units are required to compete and obtain approval from Park Service headquarters or regional offices. For example, at Grand Canyon National Park, allocations for projects between 2001 and 2005 totaled $6.7 million.
However, during that time, the amount fluctuated from $824,000 in 2001 to $1.9 million in 2004 and $914,000 in 2005. Appendix I shows project-related allocations and their fluctuations from fiscal years 2001 through 2005 for the 12 parks we visited. All twelve park units experienced an annual average increase, in nominal terms, in allocations for daily operations; however, when adjusted for inflation, 8 of the 12 parks we visited experienced a decline ranging from less than 1 percent to approximately 3 percent. For example, Yosemite National Park’s daily operations allocations increased from $22,583,000 in 2001 to $22,714,000 in 2005, less than an average of 1 percent per year. However, when adjusted for inflation, the park’s allocation for daily operations fell by about 3 percent per year. Daily operations allocations at the remaining four parks increased after adjusting for inflation, ranging from less than 1 percent to about 7 percent. For example, Acadia National Park’s daily operations allocations increased from $4,279,000 in fiscal year 2001 to $6,498,000 in fiscal year 2005, an average annual increase of about 11 percent in nominal terms and about 7 percent when adjusted for inflation. Park officials explained that although the daily operations allocation substantially increased over this period, most of the increase was for new or additional operations. To illustrate, in 2002, Acadia acquired the former Schoodic Naval Base. The increases in allocations for daily operations were to accommodate this added responsibility rather than for maintaining operations that were in existence prior to the acquisition. Park unit officials reported that required salary increases exceeded the allocation for daily operations, and rising utility costs have reduced their flexibility in managing daily operations allocations. Park Service headquarters officials reported that from 2001 through 2005, the Park Service paid personnel cost increases enacted by the Congress. For example, from fiscal years 2001 through 2005, Congress enacted salary increases of about 4 percent per year for federal employees. Park Service officials reported that the Park Service covered these salary increases with appropriations provided in the ONPS account. The Park Service allocated amounts to cover about half of the required increases, and park units had to reduce spending to compensate for the difference. As a consequence of the increases, park units had to eliminate or defer spending in order to accommodate the increases. Officials at several park units told us that since 2001, they have refrained from filling vacant positions or have filled them with lower-graded or seasonal employees. For example, in an effort to continue to perform activities that directly impact visitors—such as cleaning restrooms and answering visitor questions—officials at Sequoia and Kings Canyon National Parks stated that they left several high-graded positions unfilled in order to hire a lower graded workforce to perform these basic operational duties. Officials at most park units also told us that when positions were left vacant, the responsibilities of the remaining staff generally increased in order to fulfill park obligations. In addition to increasing personnel costs, officials at many of the parks we visited explained that rising utility costs caused parks to reduce spending in other areas. 
For example, at Grand Teton National Park, park officials told us that to operate the same number of facilities and assets, costs for fuel, electricity, and solid waste removal increased from $435,010 in 2003 to $633,201 in 2005—an increase of 46 percent, when adjusted for inflation. Officials told us that, as a result, their utility budget for fiscal year 2005 was spent by June 2005—three months early. In August, the park accepted the transfer requests of two division chiefs and used the salaries from these vacancies to pay for utility costs for the remaining portion of the year. Officials at some parks attributed increased utility costs to new construction that was generally not accompanied with a corresponding increase to their allocation for daily operations. Officials at most of the parks we visited also told us that their park units generally did not receive additional allocations for administering new Park Service policies directed at reducing its maintenance backlog, implementing a new asset management strategy, or maintaining specified levels of law enforcement personnel (referred to as its “no-net-loss policy”), which has reduced their flexibility in addressing other park priorities. While officials stated that these policies were important, implementing them without additional allocations reduced their management flexibility. For example, since 2001, the Park Service has placed a high priority on reducing its currently estimated $5 billion maintenance backlog. In response, the Park Service, among other things, set a goal to spend the majority of its visitor fees on deferred maintenance projects—$75 million in 2002 increasing to $95 million in 2005. Officials at several park units report that they have used daily operations allocations to absorb the cost of salaries for permanent staff needed to oversee the increasing number of visitor fee-funded projects. Park officials reported that the additional administrative and supervisory tasks associated with these projects add to the workload of an already-reduced permanent staff. Furthermore, while the Park Service may use visitor fees to pay salaries for permanent staff that manage and administer projects funded with visitor fees, it has a policy prohibiting such use. Instead, these salaries are paid using allocations for daily operations which reduce the amount of the allocation available for visitor services and other activities and limit the park units’ ability to maintain these services and activities. To address differences between allocations for daily operations and expenses, officials at the park units we visited reported that they reduced or eliminated some services paid with daily operations allocations— including some that directly affected visitors and park resources. Park officials at some of the parks we visited told us that before reducing services that directly affect the visitor, they first reduced spending for training, equipment, travel, and supplies paid from daily operations allocations. However, most parks reported that they did reduce services that directly affect the visitor, including reducing visitor center hours, educational programs, basic custodial duties, and law enforcement operations, such as back-country patrolling. 
Furthermore, when funds allocated for daily operations were not sufficient to pay for activities that were previously paid with this source, the park units we visited reported that they deferred activities or relied on other authorized funding sources such as allocations for projects, visitor fees, donations from cooperating associations and friends groups, and concessions fees. From 2001 to 2005, some parks delayed performing certain preventative maintenance activities formerly paid with allocations for daily operations until other authorized funding sources, such as project funds (including funds for cyclic maintenance, repair and rehabilitation, and visitor fees) could be found and approved. Rather than eliminating or not performing daily operational activities, some parks used volunteers and funding from authorized sources such as donations from non-profit partners and concessionaires’ fees to accomplish activities that were formerly paid with daily operations funds. Officials at several park units said that they increasingly depend on donations from cooperating associations to pay for training and equipment and rely on their staff and volunteers to provide information and educational programs to visitors that were traditionally offered by park rangers. Funds from these sources can be significant, but they are subject to change from year to year. Officials at several park units expressed concern about using funding from other authorized sources to address needs—not only because the funds can vary from year to year, but also because these partners’ stipulations on how their donations can be used may differ from the parks’ priorities. As a result, relying on these sources for programs that require a long term funding commitment could be problematic. We identified three management initiatives that the Park Service has undertaken to address the fiscal performance and accountability of park units and to better manage within their available resources: the Business Plan Initiative (BPI), the Core Operations Analysis (COA), and the Park Scorecard. Each initiative operates separately and is at various stages of development and implementation. Table 2 in appendix II summarizes each of the three initiatives and their stages of implementation. Through the BPI process, park unit staff—with the help of business interns from the Student Conservation Association—identify all sources and uses of park funds to determine funding levels needed to operate and manage park units. Using this information, park unit managers develop a 5-year business plan to address any gaps between available funds and park unit operational and maintenance needs. The process used in the BPI involves 6 steps, completed over an 11-week period. Park staff and the business interns (1) identify the park unit’s mission; (2) conduct an inventory of park assets; (3) analyze park funding trends; (4) identify sources and uses of park funding; (5) analyze park operations and maintenance needs; and (6) develop a strategic business plan to address gaps between funds and park needs. All 12 of the park units we visited have completed a business plan. Many officials—both at the unit level and headquarters—stated that business plans are, among other things, useful in helping them identify future budget needs. Once completed, park managers often issue a press release to announce its completion. 
Park managers may also send copies to their legislators, local community councils, and park partners (such as cooperating associations) to communicate the results. A Park Service official stated, however, that the Park Service is still refining these business plans to serve as a better tool for justifying funding needs. The COA was developed in 2004 to help park managers evaluate their park unit’s core mission, identify essential park unit activities and associated funding levels, and make fully informed decisions on staffing and funding. The COA is part of a broader Park Service-wide effort to integrate management tools to improve park efficiency. Park Service headquarters, regional officials, and park unit staffs work together in a step-by-step process to conduct the analysis. These steps include preparing a 5-year budget cost projection (BCP) to establish baseline financial information and help project future park needs, defining core elements of the park unit’s mission, identifying park priorities, reviewing and analyzing activities and associated staff resources, and identifying efficiencies. Budget staff for each park unit first complete a 5-year BCP that uses the current year’s funding level for daily operations as a baseline, and estimates future levels, increases in non-personnel costs, and fixed costs such as salaries and benefits. The general target of the analysis is to adjust personal services and fixed costs at or below 80 percent of the unit’s funding levels for daily operations. Three of the twelve park units we visited have completed (or are in the process of completing) a COA, and three will begin the COA in fiscal year 2006. The remaining six park units we visited have yet to be selected. Park unit officials told us that the preliminary results have helped them determine where efficiencies in operations might accrue. A Park Service regional official told us that the core operations process is still in its early development, noting that preliminary results are useful but that it is too early to determine what results park units will realize. Park Service headquarters developed the Park Scorecard beginning in fiscal year 2004 to serve as an indicator of each park unit’s fiscal and operational condition, and managerial performance. The scorecard is intended to provide an overarching summary of each park unit’s condition by offering a way to analyze individual park unit needs. It also provides Park Service officials with information needed to understand how park units compare to one another based on broad financial, organizational, recreational, and resource-management criteria. Although the Park Scorecard is still under development, the Park Service’s headquarters budget office used it to validate and approve requests for increases in daily operations allocations for the highest priorities among park units to be funded out of a total of $12.5 million that was provided in 2005 for daily operations directed at visitor service programs. The Park Service approved requests for funding at 3 out of the 12 parks we visited (Badlands National Park, Grand Teton National Park, and Yellowstone National Park). Park Service headquarters officials, with the assistance and input of park unit managers, plan on refining the Park Scorecard to more accurately capture all appropriate park measurements and to identify, evaluate, and support future budget increases for park units.
The Park Service also intends for park managers to use the Park Scorecard to facilitate discussions about their needs and priorities. In closing, we have found that overall, from 2001 through 2004, the Park Service increased allocations for support programs and project funding while placing less of an emphasis on funds for daily operations. In fiscal year 2005, this trend shifted, and as evidenced by our visits to 12 park units, appears to be going in the direction needed to help the units overcome some of the difficulties they have recently experienced in meeting operational needs. In responding to these trends, park unit officials found ways to reduce spending from their allocations for daily operations and to identify and use authorized sources other than these allocations to minimize some impacts on park operations and visitor services. While park units are relying more on other sources to perform operations, using such funds has drawbacks because it usually takes parks longer, and requires more effort from park employees, to obtain and use these sources. Visitor fees have been an important and significant source of funds for park units to address high-priority needs such as reducing the maintenance backlog. However, Park Service policy prohibiting the use of visitor fees to pay salaries of permanent employees managing projects may reduce the flexibility in managing the use of funding for daily operations. While the Park Service is embarking upon three management initiatives that it believes will improve park performance and accountability and help park units better manage within available resources, it is too early to assess the effectiveness of these initiatives. To reduce some of the pressure on funding for daily operations, we recommended that the Secretary of the Interior direct the Director of the Park Service to revise its policy to allow park units to use visitor fee revenue to pay the cost of permanent employees administering projects funded by visitor fees to the extent authorized by law. In commenting on a draft of our report, the department generally agreed with the recommendation, but stated that it should clearly state that visitor fee revenue (and not other sources) be used to fund only a limited number of permanent employees and be specifically defined for the sole purpose of executing projects funded from fee revenue. We believe our recommendation, as written, gives the agency the flexibility sought. The department also said that our report creates a misleading impression concerning the state of park operations in that (1) record high levels of funds are being invested to staff and improve parks, and (2) the report does not examine the results achieved with these inputs. The department also believes that while employment levels at individual park units may have fluctuated for many reasons, employment servicewide, including both seasonal and permanent employees, was stable. We believe, however, that our report provides a detailed analysis of the major funding trends affecting Park Service operations, including those at the 12 high-visitation park units we visited, as well as the department’s initiatives and efforts to achieve results. This concludes our statement for the record. For further information on this statement, please contact Robin Nazzaro at (202) 512-3841 or nazzaror@gao.gov.
Individuals making contributions to this testimony included Roy Judy, Assistant Director; Thomas Armstrong, Ulana Bihun, Denise Fantone, Doreen Feldman, Tim Guinane, Richard Johnson, Alison O’Neill, and Patrick Sigl.
In recent years, some reports prepared by advocacy groups have raised issues concerning the adequacy of the Park Service's financial resources needed to effectively operate the park units. This statement addresses (1) funding trends for Park Service operations and visitor fees for fiscal years 2001-2005; (2) specific funding trends for 12 selected high-visitation park units and how, if at all, the funding trends have affected operations; and (3) recent management initiatives the Park Service has undertaken to address fiscal performance and accountability of park units. This statement is based on GAO's March 2006 report, National Park Service: Major Operations Funding Trends and How Selected Park Units Responded to Those Trends for Fiscal Years 2001 through 2005, GAO-06-431 (Washington, D.C.: March 31, 2006). Overall, amounts appropriated to the National Park Service (Park Service) in the Operation of the National Park System account increased from 2001 to 2005. In inflation-adjusted terms, amounts allocated by the Park Service to park units from this appropriation for daily operations declined while project-related allocations increased. Project-related allocations increased primarily in (1) Cyclic Maintenance and Repair and Rehabilitation programs to reflect an emphasis on reducing the estimated $5 billion maintenance backlog and (2) the inventory and monitoring program to protect natural resources through the Natural Resource Challenge initiative. Also, on an average annual basis, visitor fees collected increased about 1 percent--a 2 percent decline when adjusted for inflation. All park units we visited received project-related allocations, but most of the park units experienced declines in inflation-adjusted terms in their allocations for daily operations. Each of the 12 park units reported that their daily operations allocations were not sufficient to address increases in operating costs, such as salaries, and new Park Service requirements. In response, officials reported that they either eliminated or reduced some services or relied on other authorized sources to pay operating expenses that have historically been paid with allocations for daily operations. Also, implementing important Park Service policies--without additional allocations--has placed additional demands on the park units and reduced their flexibility. For example, the Park Service has directed its park units to spend most of their visitor fees on deferred maintenance projects. While the Park Service may use visitor fees to pay salaries for permanent staff who administer projects funded with these fees, it has a policy prohibiting such use. To alleviate the pressure on daily operations allocations, we believe it would be appropriate to use visitor fees to pay the salaries of employees working on visitor fee-funded projects. Interior believes that, while employment levels at individual park units may have fluctuated for many reasons, employment servicewide was stable, including both seasonal and permanent employees. GAO identified three initiatives--Business Plan, Core Operations Analysis, and Park Scorecard--to address park units' fiscal performance and operational condition. At the park units we visited that have completed a business plan, officials stated that the plan, among other things, has helped them better identify future budget needs.
Due to its early development stage, only a few park units have participated in the Core Operations Analysis; at the units we visited that have participated, officials said that they are better able to determine where operational efficiencies might accrue. Park Service headquarters used the Scorecard to validate and approve increases in funding for daily operations for fiscal year 2005.
According to DOD’s Strategy for Homeland Defense and Civil Support, dated June 2005, without the important contributions of the private sector, DOD cannot effectively execute its core defense missions. Private industry manufacturers provide the majority of equipment, materials, services, and weapons for the U.S. armed forces. The President designated DOD as the sector-specific agency for the DIB. In this role, DOD is responsible for collaborating with all relevant federal departments and agencies, state and local governments, and the private sector; encouraging risk management strategies; and conducting or facilitating vulnerability assessments of the DIB as set forth in HSPD-7. In executing these responsibilities, the Secretary of Defense requires a network of organizations with diverse roles and missions. Key participants in the network include the following: The Undersecretary of Defense for Acquisition, Technology, and Logistics, USD(AT&L), who is responsible for, among other things, integrating DCIP policies into acquisition, procurement, and installation policy guidance and for coordinating with ASD(HD&ASA) to ensure DCIP-related guidance is developed and implemented, and that system providers remediate vulnerabilities identified prior to system fielding or deployment. ASD(HD&ASA), which serves as the principal civilian advisor to the Secretary of Defense on the identification, prioritization, and protection of DOD’s critical infrastructure. ASD(HD&ASA) assigned responsibility for the DCIP, including DIB sector-specific agency responsibilities, to the Director for Critical Infrastructure Protection under the Deputy Assistant Secretary of Defense for Crisis Management and Defense Support to Civil Authorities. The DCIP office provides policy, program oversight, integration, and coordination of activities. DCMA, which is the defense sector lead agent responsible for the coordination and oversight of DCIP matters pertaining to the DIB because of DCMA’s established working relationship with DIB owners/operators. DCMA responsibilities include planning and coordinating with all DOD components and private-sector partners that own or operate elements of the DIB. Private-sector owners, operators, and organizations; and other federal departments and agencies, including DHS, the FBI, and the Departments of Energy, Commerce, the Treasury, and State. It also includes state and local agencies, international organizations, and foreign countries. Under Homeland Security Presidential Directive 7, federal departments and agencies are to identify, prioritize, and coordinate the protection of critical infrastructure and key resources in order to prevent, deter, and mitigate the effects of deliberate efforts to destroy, incapacitate, or exploit the infrastructure and resources; and they are to work with state and local governments and the private sector to accomplish this objective. Sector- specific agencies, among other things, are to encourage risk management strategies to protect against and mitigate the effect of attacks against critical infrastructure and key resources. DOD’s risk management approach is based on assessing threats, vulnerabilities, criticalities, and the ability to respond to incidents. Threat assessments identify and evaluate potential threats on the basis of capabilities, intentions, and past activities. Vulnerability assessments identify potential weaknesses that may be exploited and recommend options to address those weaknesses. 
Criticality assessments evaluate and prioritize contractors on the basis of their importance to mission success. These assessments help prioritize limited resources and thus, if implemented properly, would reduce the expenditure of resources on lower-priority contractors. DOD’s risk management approach also includes an assessment of the ability to respond to, and recover from, an incident. ASD(HD&ASA) officials said their office provided research and development funding for program development in fiscal years 2005 and 2006 of $550,000 and $675,000, respectively. It did not provide research and development funding to DCMA in 2007 and said it did not intend to provide any during the period of fiscal years 2008 to 2013. They said that for operations and maintenance, DOD funded the program at about $1.1 million and $1.0 million in fiscal years 2004 and 2005, respectively; and $2.5 million and $2.0 million in fiscal years 2006 and 2007, respectively. DOD plans to increase operations and maintenance funding to about $8.3 million in fiscal year 2008, about $9.4 million in 2009, and about $10.1 million in 2010 before decreasing it to about $8.8–$8.7 million in subsequent fiscal years through fiscal year 2013. In January 2007, the Joint Requirements Oversight Council, chaired by the Vice Chairman of the Joint Chiefs of Staff, approved the National Guard Critical Infrastructure Program—Mission Assurance Assessment (CIP-MAA) capability for the DIB. The council agreed that the services will provide funding to meet the requirements for fiscal years 2008–2013, and it endorsed the National Guard as the overall lead agency to implement the CIP-MAA. The operations and maintenance funding is summarized in figure 1. DOD has begun developing and implementing a risk management approach to ensure the availability of DIB assets needed to support mission-essential tasks, though implementation is still at an early stage. The approach comprises two plans. First, the DIB sector assurance plan, issued in May 2005 and updated in May 2007, outlines an approach for identifying vulnerabilities, risks, and effect on business; implementing remediation and mitigation strategies; and managing consequences to ensure continuity of operations. Second, the DIB sector-specific plan, submitted in December 2006, outlines DOD’s approach to executing its sector-specific responsibilities, follows guidance established by DHS, and complements other DOD critical infrastructure policy. It focuses efforts on assets, systems, networks, and functions that, if damaged, would result in unacceptable consequences to the DOD mission, national economic security, public health and safety, or public confidence. The sector assurance plan provides a coordinated strategy for managing risk at DIB critical asset sites located throughout the world and describes a risk management approach and plans for the DIB. It focuses on steps to (1) identify a critical asset list; (2) prioritize the critical assets on that list; (3) perform vulnerability assessments on high-priority critical assets; and (4) encourage contractors’ actions to remediate or mitigate adverse effects found during these assessments, as appropriate, to ensure continuity of business operations. DOD depends on the DIB to accomplish its work in support of military missions. The absence or unavailability of some assets designated as critical DIB assets, and the products and services these assets produce, could cause military mission failure.
To identify DIB critical assets, DCMA industrial analysts and other DOD personnel compiled a list of approximately 900 important defense contractor assets, and then narrowed this number by using another set of criteria. DCMA has also developed an asset prioritization model for determining a criticality score and ranking critical assets, from highest to lowest risk. It has established a standardized mission assurance vulnerability assessment process for critical DIB assets, and as of June 1, 2007, had completed and issued reports for eight assessments and had three other assessments in process. ASD(HD&ASA) is developing guidance to provide a standardized process for determining, planning, and implementing remediation actions for DOD personnel involved in remediating risks and supporting overall DOD mission assurance. Table 1 provides a summary of the current number of important and critical DIB assets identified and the number of contractors assessed. DCMA has developed a process to identify the most important DIB assets and to narrow this list to those it considers critical using a tiered approach that enables identification of important capabilities and critical assets from the hundreds of thousands of entities constituting the DIB. The collection of data on each entity within the DIB was considered neither practical nor an effective use of limited resources, so DCMA focused on reducing the magnitude of assets to a manageable number through the use of government DIB subject-matter experts. DCMA has developed a process to identify the most important DIB assets and to narrow this list to those it considers critical. The criteria used for both lists are shown below in table 2. The critical asset list is reviewed, updated, and approved annually. DCMA identifies potential assets meeting the criteria, and the military services and defense agencies then validate and update the list. DCMA reviews and validates the updated list and prioritizes it using the asset priority model. DCMA then coordinates with senior acquisition executives and submits the revised critical asset list for approval to the Deputy Under Secretary of Defense for Industrial Policy, USD(AT&L), and ASD(HD&ASA). DCMA has been developing an asset prioritization model for determining a criticality score and ranking critical assets from highest to lowest risk. This model is to provide a mechanism for DCMA to allocate limited resources to those critical DIB assets assessed to be most vulnerable: the higher the score, the higher the priority of the asset for vulnerability assessment and possible remediation/mitigation actions. The model uses 16 weighted factors that are aggregated to assign a vulnerability score to each asset. These factors are broadly classified into mission (5), economic (4), threat (5), and other (2), as shown below in table 3. Data for the determination of these factors are collected from DCMA surveys and analysis, supplemented by various commercial and government sources, including the Defense Logistics Agency, the military services, and the combatant commands. If there are missing data for a given item, DCMA’s rule is to default to a high-risk score, as this is the most conservative assumption. For threat data currently obtained by DCMA, the model includes an assessment of current, potential, and technologically feasible threats to assets from hostile parties as well as from natural or accidental disasters inherent to the asset or its location. 
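A weighted-factor prioritization model of the kind described above can be sketched briefly. The factor names, weights, and 0-10 scale below are invented stand-ins rather than the model's actual 16 factors; the sketch only illustrates the two ideas reported here: aggregating weighted factor scores into a single criticality score, and defaulting missing data to a high-risk value.

```python
# Illustrative weighted scoring; factor names, weights, and the 0-10 scale
# are invented. Only the ideas of weighted aggregation and defaulting
# missing data to a high-risk value come from the model described above.

HIGH_RISK_DEFAULT = 10.0   # missing data is treated as worst case

WEIGHTS = {                # hypothetical factors and weights
    "mission_impact": 0.30,
    "sole_source":    0.20,
    "hostile_threat": 0.25,
    "natural_hazard": 0.15,
    "recovery_time":  0.10,
}

def criticality_score(factor_scores: dict) -> float:
    """Aggregate weighted factor scores; missing factors default to high risk."""
    return sum(weight * factor_scores.get(name, HIGH_RISK_DEFAULT)
               for name, weight in WEIGHTS.items())

assets = {
    "asset_a": {"mission_impact": 9, "sole_source": 8, "hostile_threat": 4,
                "natural_hazard": 2, "recovery_time": 6},
    "asset_b": {"mission_impact": 5, "sole_source": 3},   # threat data missing
}

# Rank assets from highest to lowest score to order vulnerability assessments.
for name in sorted(assets, key=lambda n: criticality_score(assets[n]), reverse=True):
    print(f"{name}: {criticality_score(assets[name]):.1f}")
```

In this example the second asset ranks higher solely because its missing threat factors default to the worst case, which is why the conservative default matters when contractor-specific data are incomplete.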
Hostile threat information is collected by the Counter Intelligence Field Activity office from various intelligence sources and then summarized in a threat assessment document for specific sites during the prioritization process, and in a detailed threat assessment prior to conducting an actual National Guard assessment of a site. The Counter Intelligence Field Activity has also established an arrayed threats data system as the DIB sector’s primary method for obtaining threat-related information. DCMA has established a standardized mission assurance vulnerability assessment process for critical DIB assets. As of June 1, 2007, it had completed and issued eight assessment reports. Lessons learned from earlier assessments have been incorporated into training for the assessments scheduled for fiscal year 2007. The current approach for performing assessments has evolved from earlier efforts designed to protect the mission of the asset from a broad spectrum of threats. The approach calls for multidisciplinary teams to conduct performance-based assessments to identify vulnerabilities of critical missions and recommend ways to mitigate those vulnerabilities. DOD found these efforts to be effective, but costly and time consuming. It developed a set of standards to conduct vulnerability assessments, building on other vulnerability assessment methods DOD has used. Working through DCMA and the National Guard Bureau, DOD has established a standardized mission assurance assessment for application to critical DIB assets. These assessments consider effect, vulnerability, and threat/hazard from natural disaster, technological failure, human error, criminal activity, or terrorist attack. To perform assessments, DCMA partners with the Defense Security Service (DSS), the Counter Intelligence Field Activity, the Defense Intelligence Agency (DIA), and appropriate federal, state, and local law enforcement to identify and characterize all hazard threats to key assets, and uses benchmarks and standards to ensure consistency within the DIB and the broader DCIP community. The assessment process typically involves (1) using the critical asset list to select the DIB contractor candidate for assessment; (2) notifying the selected DIB asset to schedule the vulnerability assessment; (3) conducting a preassessment briefing with the contractor; (4) scheduling the assessment; (5) negotiating a memorandum of agreement with the contractor to coordinate the terms of the assessment; (6) performing the assessment, which is designed to assess vulnerability to a broad spectrum of threats; (7) providing an outbriefing; and (8) writing a final vulnerability assessment report. The process for conducting vulnerability assessments on critical DIB contractors is early in implementation and only 8 of the planned 203 have been completed, with reports issued, as of June 1, 2007. DCMA estimated that conducting assessments on all critical DIB assets will take several years. Between fiscal years 2003 and 2006, DOD considered and evaluated different approaches that might be used in conducting on-site vulnerability assessments. For example, five assessments of different types were done by different DOD groups prior to fiscal year 2006. With the benefit of the earlier assessments, DCMA in fiscal year 2006 developed a pilot project that included six vulnerability assessments and used the information gained to develop an approach for conducting on-site vulnerability assessments at all critical DIB asset locations. 
DCMA had settled on a methodology for outreach to contractors, a standardized approach for conducting on-site vulnerability assessments, and training for National Guard teams to conduct these assessments. DCMA is planning a number of improvements as a result of lessons learned from the six pilot project assessments. For example, DCMA officials said they planned to update the existing benchmarks, develop additional benchmarks for security operations and emergency management, and determine the final report format to use for future assessments. In addition, DCMA officials said that, as a result of the pilot assessments, they plan to change the process on future assessments. For example, rather than a single visit to the contractor to perform the entire assessment, they intend to conduct an advance site visit to identify key officials, gather information, and perform preliminary analyses on manufacturing and infrastructure. They said this will allow more time for up-front analysis and alleviate the workload and reduce the hours needed at the time of the assessment visit. In fiscal year 2007, DCMA planned to have National Guard teams conduct 19 vulnerability assessments and then to increase its pace to complete these vulnerability assessments at a rate of 50 per year. However, it has changed this goal for 2007, and even at the rates planned it would take 6 years, or until 2012, to complete the initial vulnerability assessments on the 203 critical DIB contractors identified in 2006, as shown in table 4. ASD(HD&ASA) has been developing the DOD Remediation Planning Guide for the DCIP remediation process in order to provide a standardized process for determining, planning, and implementing remediation actions for DOD personnel involved in remediating risks and supporting overall DOD mission assurance. The planning guide encompasses: (1) DOD-owned assets that support the National Military Strategy; (2) non-DOD-owned assets that support the National Military Strategy (i.e., government-owned infrastructure, commercial-owned infrastructure, and the defense industrial base); and (3) non-DOD-owned assets that are so vital to the nation that their incapacitation, exploitation, or destruction could have a debilitating effect on the security or economic well-being of the nation or could negatively affect national prestige, morale, and confidence. Because proper remediation lessens the negative effect of an event, it makes sense in many cases to strengthen, through a reduction of risk, those assets critical to DOD missions. When unacceptable levels of risk are identified, an asset owner should seek to remediate them in a prioritized fashion based on their overall risk to DOD. This planning guide identifies and discusses specific actions that are essential to remediation strategy development and implementation. The planning guide calls for an effective plan of action and milestones focusing on a remediation strategy to be developed as soon as feasible following the risk assessment.
The planning guide provides the basic steps for an effective plan and suggested time frames: (1) confirm ownership and prioritize risk as soon as possible after completion of assessment; (2) analyze options and determine the best approach within 30 days after a risk assessment is completed; (3) develop the remediation plan as soon as practicable, but not later than 60 days after the risk assessment; (4) implement the remediation plan within 2–4 weeks following remediation plan approval; (5) keep appropriate officials informed at plan commencement and within 2–4 weeks of remediation plan completion; and (6) execute follow-up actions no more than 3 years after risk assessment. The planning guide also includes a chapter focused on DIB remediation. It states that the remediation measures for the DIB focus on facilitating relationships and sharing information to implement the appropriate level of protection. The chapter referring to the DIB is designed to assist asset owners, operators, and DOD managers in determining whether a remediation action is justified and required. The DIB sector remediation process includes a step-by-step approach for analyzing issues and making judgments. It describes a remediation process that will help preserve privately owned DIB critical asset capabilities. ASD(HD&ASA) officials told us it was designed in a general way, without suggested time frames, because of the voluntary nature of DIB participation in the DCIP. DOD faces several key challenges in implementing its DIB risk management approach and will need to address them to ensure that its approach is sound and its progress can be measured. First, the critical asset list used by DCMA does not incorporate comprehensive, mission-essential task information from the military services. Second, the prioritization model used by DCMA has not yet undergone external technical review and lacks both contractor-specific data and comprehensive threat information. Third, DCMA is not scheduling and conducting its vulnerability assessments in accordance with the asset rankings in its prioritization model. Fourth, DOD lacks a plan for identifying and addressing challenges in assessing vulnerabilities of critical foreign contractors. DCMA is not currently obtaining comprehensive information from all of the combatant commands and services needed to develop a critical asset list that is linked to DOD’s mission-essential tasks. Neither the 2006 DIB critical asset list nor the list in development for 2007 reflects mission-essential task data from all of the combatant commands and services. The DOD risk management approach calls for identifying DIB assets critical to supporting combatant commanders’ mission-essential tasks that would result in DOD-wide mission failure if the asset were to be damaged, degraded, or destroyed. According to DCMA and the services, DCMA and the Army and Navy provided most of the data for the 2006 critical asset list, but the Air Force did not provide input for the list. In responding to DCMA’s request for the 2007 critical asset list, the Air Force limited its participation to reviewing and validating DIB critical assets that DCMA had identified and compiled using only DCMA’s methodology. This service has made no independent submission of DIB-like assets to DCMA. DCMA officials told us they were aware of the need to link DIB assets to mission-essential tasks.
The DIB sector assurance plan calls for identifying assets critical to supporting combatant commanders’ mission-essential tasks that would result in DOD-wide mission failure if the asset were to be damaged, degraded, or destroyed, and DCMA says it plans to continue to collaborate and strengthen relationships with the combatant commands and other DOD organizations in identifying DIB assets and systems supporting their critical missions. According to OSD officials, the services are still working on identifying the mission-essential tasks and the defense critical assets that support these tasks, including DIB defense critical assets. The method for identifying critical DIB assets has evolved, and refinements are continuing. Thus far, a plan with targets and time frames has not been established for identifying all of the mission-essential tasks for all of the services. The asset prioritization model has not undergone external technical review. Further, some needed contractor-specific data were missing for a number of the critical assets. Additionally, the absence of comprehensive threat data undermines the utility of the index score for prioritizing contractors. Our review of the asset prioritization model revealed that weighting factors were selected and much of the input data were determined according to subjective decisions made with only limited review. According to the DCMA official who developed the model, the subjectivity involved in assigning the precise values of the weights in the model is the most controversial aspect of the model. Cross-disciplinary collaboration and peer review are, in our opinion as well as that of DOD officials with whom we spoke, important means of validating modeling strategies. As of the time of our review, DCMA had not had its model independently reviewed. The model, created in September 2004, has undergone a number of refinements, and more are planned. According to the DCMA staff member who developed the model, he is the only individual who fully understands the model and all submodels and is responsible for assigning factor risk scores to each asset. Future initiatives for refining the model include (1) developing submodels in 2007, (2) addressing issues regarding data absence and data obsolescence in 2008, (3) developing guidance for others on how to use the model (no established target date), and (4) moving from a spreadsheet format to a Web-based application (no established target date). Without independent formal review of its asset prioritization model, DCMA cannot be assured that the model is valid and suitable for its intended purpose. Our review of the model also revealed that contractor-specific data were missing for a number of the critical assets. DCMA collects open-source and in-house statistical data on contractor operations, but it lacks some needed contractor-specific information from the DIB contractors on their operations for use in the model. DCMA has undertaken two surveys to obtain these needed data and is planning a third survey, but these efforts depend on contractors’ willingness to provide business sensitive information and they have thus far not been fully successful. The model does not distinguish between assets marked as high risk by default for lack of data and those for whom data corroborate the high-risk designation. 
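One way to make that distinction visible, under the same hypothetical scoring assumptions as the earlier sketch, is to carry a record of which factor values were defaulted for lack of data alongside the aggregate score. The sketch below illustrates the idea; it is not a feature of DCMA's model, and the function and field names are assumptions.

```python
# Illustrative only: report both the aggregate score and how much of it
# rests on missing-data defaults, so a data-corroborated high-risk asset
# can be told apart from one that scores high mainly for lack of data.
# The function and field names are hypothetical, not part of DCMA's model.

def criticality_score_with_provenance(factor_scores, weights, high_risk_default=10.0):
    total = 0.0
    defaulted = []  # factors scored by default rather than by data
    for factor, weight in weights.items():
        value = factor_scores.get(factor)
        if value is None:
            value = high_risk_default
            defaulted.append(factor)
        total += weight * value
    defaulted_share = sum(weights[f] for f in defaulted) / sum(weights.values())
    return {
        "score": total,
        "defaulted_factors": defaulted,
        "defaulted_share": defaulted_share,  # 0.0 means the score is fully data-based
    }


# Example with two hypothetical factors: the high score here rests
# heavily on a default, and the output says so explicitly.
weights = {"mission_criticality": 0.15, "hostile_threat": 0.12}
print(criticality_score_with_provenance({"mission_criticality": 9}, weights))
# {'score': 2.55, 'defaulted_factors': ['hostile_threat'], 'defaulted_share': 0.444...}
```

Flagging the defaulted share in this way would also let analysts target data collection at the highest-weight missing factors, consistent with the report's observation that data collection should focus on the most heavily weighted, mission-critical items.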
Our review of the asset prioritization model found that DIB contractors with similar entries based on missing data for several factors may not be differentiated from one another; it was not always apparent whether some contractors were identified as high risk because of an unavailability of data or because of the presence of data that justified the identification. The ability to distinguish between high scores due to risk and high scores due to missing data has important implications for resource allocation, for data collection and assessment, and for risk remediation. Additionally, prioritization of data collection should focus on those items that are most mission-critical and have the highest weight in the model’s scores. DCMA has conducted two surveys, called industrial capabilities assessments, to obtain contractor-specific information on DIB assets, but both efforts yielded limited response rates. DCMA officials said this was due at least partly to contractors’ reluctance to provide information. In 2004, DCMA sent a questionnaire to obtain additional information from DIB contractors. DCMA had requested this information using a cover letter to the companies signed by the Assistant Secretary of Defense for Homeland Defense (ASD-HD) and coordinated with DCMA officials in the field. DCMA officials said that these steps were taken to help ensure a greater response to the survey. Nevertheless, some of the survey forms returned were incomplete, and some of the data provided were determined to be unreliable. In 2005, DCMA sent a revised questionnaire, but it was not administered with the same level of discipline used in the first one. For example, it did not use DOD on-site personnel to help ensure high response rates, and only 30 percent of those surveyed responded. Again, responses were incomplete and some of the data were not considered reliable. DOD officials said that contractors were more reluctant to provide certain types of data, such as financial, disaster planning, reconstitution, and especially forecast data. DCMA did not conduct a survey in 2006. DCMA is planning another effort in fiscal year 2007 to send out a revised capabilities-assessment questionnaire to DIB contractors. DCMA officials are in the process of revising and expanding the assessment to be sent to contractors to more specifically address critical infrastructure protection. Once DCMA has finalized the critical asset list for 2007, it is planning to conduct a new industrial capabilities survey. However, it will take several months for DIB critical contractors to receive, fill out, and return the industrial capabilities survey, and DCMA has not identified specific steps to ensure that this survey receives a high response rate with quality information. Our review of DOD’s asset prioritization model also revealed a lack of comprehensive threat information. DOD officials told us that intelligence-gathering agencies currently provide information to DCMA through ad hoc agreements, as opposed to a more formalized arrangement. The collection and analysis of DIB-related intelligence information have evolved over time among agencies such as DSS, the Counter Intelligence Field Activity, and DCMA. According to DCMA as well as other DOD officials, DCMA does not receive comprehensive threat information from the appropriate intelligence agencies to enable it to accurately prioritize DIB assets.
These intelligence agencies include the National Counterterrorism Center, DHS’s Office of Intelligence and Analysis and its Homeland Infrastructure Threat and Risk Analysis Center, the FBI, and others. While DCMA obtains information for prioritization from the Counter Intelligence Field Activity, DCMA does not routinely obtain full threat information from these other intelligence agencies. The absence of comprehensive threat data undermines the utility of the index score for prioritizing contractors. Until DCMA develops and implements procedures for obtaining the threat data needed, it cannot rely on the outputs of its asset prioritization model. DCMA is conducting its vulnerability assessments on critical DIB assets according to contractor accessibility and without regard for those assets’ respective prioritization model rankings. According to DCMA, one purpose of the prioritization model is to rank critical assets and to use this order to prioritize assessments. DCMA should schedule and conduct its vulnerability assessments on the critical DIB assets based upon their respective rankings as validated in the asset prioritization model. Furthermore, DOD has not established targets or time frames for resolving this issue. The assessments to be performed should be identified from a comprehensive critical asset list that has been ranked based on a reliable asset prioritization model. However, DCMA has not used the rankings from its asset prioritization model to schedule outreach visits or on-site vulnerability assessments. According to DCMA officials, a high score on the model should result in DCMA’s contacting the contractor to conduct a vulnerability assessment. However, they said that coordinating on-site assessments is complicated and highly sensitive. DCMA officials say that a lack of facility security clearances complicates their efforts to get DIB contractors to participate in DOD’s risk management program because DCMA cannot inform uncleared contractors that they are on the classified critical asset list or discuss with them vulnerabilities found at their facilities. Consequently, officials have devoted outreach efforts, first, to those contractors at facilities having the necessary security clearances, and next, to those that DCMA officials believe would be most amenable to undergoing an assessment. About 52 percent of the DIB facilities identified as critical lack security clearances for the facility or any of their personnel, and thus cannot receive vulnerability assessments or discuss needed remediation actions. DSS officials told us that, though they recognized that many critical contractors did not have facility security clearances, DSS lacks the resources needed to preemptively clear all critical DIB facilities. In further explaining why they have not followed the prioritization ranking in conducting assessments, DCMA officials said that because private-sector DIB contractors’ participation in the program is voluntary, DCMA must rely on the contractors’ willingness to cooperate and provide information. According to DCMA officials, some DIB contractors have had concerns about sharing information that they consider proprietary, and about the possibility of incurring additional costs and liabilities to correct any vulnerabilities identified as part of this program as a result of sharing this information. These concerns regarding sharing information with DOD were echoed by some of the DIB contractors with whom we spoke, for a variety of reasons.
For example, when asked about his willingness to share certain information with DOD, one DIB contractor we spoke with said that he was concerned that information he deemed proprietary or potentially damaging to the company could somehow be released or disclosed, and he was unsure how DOD would protect such information. Furthermore, DOD officials noted that some significant DIB contractors are involved in classified, special access programs that could involve military mission-essential tasks and as a result may not be allowed or willing to share certain types of information. They also noted that there is no similar effort to identify critical DIB assets from the classified special access program perspective. Consequently, some significant critical DIB assets may not currently be included as part of the program. DCMA officials told us that, in order to overcome resistance from those DIB contractors that may be reluctant to share information and participate in the program, they have developed tactics that in some cases have been successful in promoting greater voluntary participation. For example, in at least one case, DCMA requested that a high-level DOD official reach out to the contractor directly and make the informational request. Also, DCMA officials told us that they develop memoranda of agreement with contractors that delineate what the on-site assessment will entail, what the assessment team and the company are agreeing to do, and the manner in which the contractor’s information will be used and protected. DCMA officials told us that while these steps have resulted in progress, they have also been time-consuming and have affected the sequence in which critical DIB contractors have been scheduled for assessment. The program, and DCMA’s outreach and educational efforts in eliciting contractor information, continue to evolve. For example, the sector-specific plan states that DOD plans to develop an accreditation plan for identifying and certifying Protected Critical Infrastructure Information (PCII) under DHS’s PCII program. The PCII program was established by DHS pursuant to the Critical Infrastructure Information Act of 2002. The act provides that critical infrastructure information that is voluntarily submitted to DHS for use by DHS regarding the security of critical infrastructure and protected systems, analysis, warning, interdependency study, recovery, reconstitution, or other informational purpose, when accompanied by an express statement, shall receive various protections, including exemption from disclosure under the Freedom of Information Act. If such information is validated by DHS as PCII, then the information can only be shared with authorized users. Before accessing and storing PCII, organizations or entities must be accredited and have a PCII officer. Authorized users can request access to PCII on a need-to-know basis, but users outside of DHS do not have the authority to store PCII until their agency is accredited. However, the lack of accreditation does not otherwise prevent entities from sharing information directly with DOD. Nonetheless, we noted in our April 2006 report that nonfederal entities continued to be reluctant to provide their sensitive information to DHS because they were not certain that their information would be fully protected and were concerned that it could be used for future legal or regulatory action or inadvertently released.
Since our April report, DHS published its final rule implementing the act on September 1, 2006, but we have not examined whether nonfederal entities are now more willing to provide sensitive information to DHS under the act, or DOD’s cost to apply for, receive, and maintain accreditation. However, one of the DIB contractors we interviewed mentioned generally that while some advances have been made in information protection, such as the establishment of the PCII program, the contractor continues to be concerned that the program has yet to demonstrate that it can provide good security for contractor-provided information, and remains wary about damage from public or competitor disclosure. DCMA officials also pursued new legislation and additional provisions for the Defense Federal Acquisition Regulation in order to, in their view, potentially increase industry participation, but these changes were ultimately not enacted. For example, DCMA officials had drafted a legislative proposal that stated that “critical supplier assessments and company specific assessments developed under the Defense Critical Infrastructure Program, evaluating the security of Defense Critical Suppliers, shall not be disclosed under the Freedom of Information Act.” However, DCMA officials told us that the legislative proposal was ultimately not approved for inclusion in the DOD legislative proposals that are sent to the Congress for consideration, and there are no current plans within DOD to pursue this legislation. In addition, DCMA officials pursued the addition of clauses to the Defense Federal Acquisition Regulation. The proposed language would have included several provisions pertaining to the critical infrastructure of the defense industrial base, such as stating that the contractor shall be responsible for the overall organizational physical protection and security of its own critical infrastructures and shall have in place a comprehensive security plan, relating to overall plant and facility security, designed to protect those infrastructures, and that the government shall be permitted to conduct or facilitate vulnerability and mission assurance assessments under the DCIP. However, these changes were ultimately not submitted to the Defense Acquisition Regulation Council. DCMA has not established a plan to deal with the potential challenges inherent in assessing vulnerabilities of foreign contractors. In order to do so, DCMA needs to coordinate with other agencies, such as the Department of State, to develop strategies to better ensure that foreign contractor vulnerabilities can be identified and addressed. DCMA has not conducted any assessments of foreign contractors. The critical asset list identifies nine foreign contractors. DCMA planned to conduct a pilot assessment on one of these contractors in 2006, but did not do so, according to DCMA officials, because procedures are not yet in place for assessing foreign suppliers of products manufactured overseas. The DIB sector-specific plan recognizes the challenge involved when DIB assets are located in foreign countries, stating that many of the plan’s proposed activities could be perceived there as U.S. government intrusion into sovereign areas of the host country, particularly with respect to threats and vulnerabilities. The plan also recognizes that DOD and the DIB Sector Coordinating Council must ensure that DIB protection activities are coordinated with U.S.
embassies and host governments; that where pertinent treaties exist, activities should conform to them; and that a strategy needs to be developed for an action plan in foreign countries with DIB assets. DOD is in the process of implementing a risk management approach to identify, prioritize, evaluate, and remediate threats, vulnerabilities, and risks to critical DIB assets, including those DIB assets that are critical to achieving DOD’s mission-essential tasks. Several key challenges to the implementation of this program need to be addressed in order for DOD to be able to ensure that its approach is sound. First, in identifying and prioritizing critical DIB assets, DOD is not currently incorporating data reflecting mission-essential task information from all of the services. Second, in order for DOD’s asset prioritization model to be reliable, the model would benefit from appropriate external technical review, and it also lacks selected contractor-specific data that need to be provided by DIB contractors, as well as comprehensive threat information from the appropriate intelligence agencies. Without a comprehensive list of critical assets and a reliable asset prioritization model, DOD cannot ensure that it has identified the most important DIB critical assets, as is necessary for carrying out the National Military Strategy. Third, DOD is currently scheduling and conducting assessments based on contractor amenability and security clearance status, rather than on the rankings assigned to critical DIB assets according to its asset prioritization model. Unless DOD assesses assets based on their rankings determined by a reliable asset prioritization model, DOD will not be in a sound position to know that it is assessing the most critical DIB assets or making the best use of limited resources. Fourth, DOD has not yet developed a plan for identifying and addressing potential challenges in assessing vulnerabilities of critical foreign DIB contractors. As a result, vulnerabilities in these critical foreign contractors can potentially threaten their availability to DOD. Until all of these issues are addressed, DOD will lack the visibility it needs over critical DIB asset vulnerabilities, will be unable to encourage critical DIB contractors to take needed remediation actions, and will be unable to make informed decisions regarding limited resources. To manage the complete development of the risk management approach to better ensure its effectiveness we recommend the Secretary of Defense direct the ASD(HD&ASA) to develop a management framework that includes targets and time frames and undertakes the following steps: Obtain comprehensive data from all the combatant commands and services based on mission-essential task information, and incorporate these data with those set forth in DCMA guidance, to develop a comprehensive list of the critical DIB assets. Improve the reliability of its asset prioritization model by obtaining the appropriate external technical review; developing a detailed plan for improving response rate and data quality from DIB contractors in conducting its next capabilities survey, to ensure that DCMA obtains contractor-specific data needed for establishing priorities; and identifying and developing procedures for obtaining comprehensive threat information from the appropriate intelligence agencies, including DHS, the FBI, and others to use as model inputs to prioritize DIB assets and conduct vulnerability assessments. 
Schedule and conduct vulnerability assessments on the critical DIB assets based on their respective rankings as validated in the asset prioritization model, to ensure that the most critical DIB assets are assessed in a timely manner and DOD maximizes its use of limited resources. Prepare a plan to collaborate with the Department of State and other agencies, as appropriate, to develop options to identify and address potential challenges in assessing vulnerabilities of critical foreign contractors. In written comments on a draft of this report, DOD partially concurred with all four recommendations. In its response, DOD cited actions it planned to take that are generally responsive to our recommendations. DOD also provided us with technical comments, which we incorporated in the report, as appropriate. DOD’s response is reprinted in appendix II. DOD partially concurred with our recommendation to develop a management framework that includes targets and time frames and to obtain comprehensive data from all the combatant commands and services based on mission-essential task information. DOD stated that DCMA is aware of the need to link DIB assets to mission-essential tasks and that ASD(HD&ASA) has developed a draft DOD instruction to formalize this process. DOD also said that DCMA is incorporating this framework into its process for critical asset identification and that ASD(HD&ASA) is developing a DCIP program plan that will address targets and time frames for achieving these goals. DOD commented that this plan should be completed by the first quarter of fiscal year 2008. DOD partially concurred with our recommendation to improve the reliability of its asset prioritization model by obtaining the appropriate external technical review, needed contractor specific data, and comprehensive threat information from the appropriate intelligence agencies and stated that DCMA had coordinated the review of the asset prioritization model with the DOD Modeling and Simulation Office, the Canadian Department of National Defense, and various DOD activities. However, at the time of our review, DCMA had not yet coordinated the review of the asset prioritization model with these offices, and other feedback on the model was informal and undocumented. We found that the model has had a number of refinements over the years and that there are fundamental processes that have not been reviewed. We believe that DOD is responsive to our recommendation in its comment that DCMA is open to further technical review of the APM and will work with ASD(HD&ASA) to identify credible and capable subject matter experts to support this effort, and we would stress the need to develop targets and time frames for completing these actions. DOD also commented that developing a detailed plan may improve the contractor response rate and data quality; but noted that participation by industry to provide information is voluntary and contractors continue to be concerned with the release of certain types of data, such as financial, disaster planning, reconstitution, and especially forecast data. We agree that contractor participation is voluntary but there are strategies available to DCMA to improve response rates. As noted in our report, DCMA response rates declined when the process lacked a coordinated plan. DOD also stated that a draft DOD Instruction 3020.nn identifies the intelligence agencies that DCMA will work with to obtain threat and hazard information on DIB critical assets. 
However, we found that the draft instruction identified only the Under Secretary of Defense for Intelligence as responsible for securing support from other DOD activities and did not reference securing support from agencies we note in the report, such as DHS and the FBI. As noted in DCMA’s May 2007 sector assurance plan, barriers related to threat assessment information and information sharing still require management attention. DOD partially concurred with our recommendation to schedule and conduct vulnerability assessments on the critical DIB assets based on their respective rankings as validated in the asset prioritization model, and noted a number of factors that may prevent scheduling assessments in accordance with the model’s numerical ranking. For example, DOD noted that if a contractor on the list is reluctant at first or refuses to participate, it should move to the next contractor on the list, while simultaneously negotiating with the first contractor to gain its participation. DOD also noted that the list is dynamic and may change year to year. In addition, DOD may accept the vulnerability assessments performed internally by the contractor, provided that the company meets established requirements and standards. We believe that the approach described by DOD acknowledges the intent of our recommendation to conduct assessments on the basis of those deemed most critical. We recognize that there will be reasons to conduct assessments out of order, and would expect that those decisions will be documented. DOD partially concurred with our recommendation to prepare a plan to collaborate with the Department of State and other agencies, as appropriate, to develop options to identify and address potential challenges in assessing vulnerabilities in foreign critical DIB assets. DOD stated that DCMA efforts to date have focused primarily on continental United States assets because they constitute 95 percent of the assets on the critical asset list and that the DIB sector-specific plan recognizes the challenges involved when DIB assets are located in foreign countries. DOD further stated that DCMA will continue to work with ASD(HD&ASA) in laying out a framework to both address the issue and work in collaboration with other government agencies, including the Department of State. As agreed with your offices, we are sending copies of this report to the Chairman and Ranking Member of the Senate and House Committees on Appropriations, Senate and House Committees on Armed Services, and other interested congressional parties. We also are sending copies of this report to the Secretary of Defense; the Secretary of Homeland Security; the Director, Office of Management and Budget; and the Chairman of the Joint Chiefs of Staff. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-5431 or by e-mail at dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
To conduct our review of the Department of Defense’s (DOD) defense industrial base (DIB) program, we obtained relevant documentation and interviewed officials from the following DOD organizations: Office of the Secretary of Defense (OSD) Under Secretary of Defense for Personnel and Readiness, Information Under Secretary of Defense for Acquisition, Technology, and Logistics, Office of the Deputy Under Secretary of Defense for Industrial Policy; Under Secretary of Defense for Intelligence, Counterintelligence & Security, Physical Security Programs; DOD Counterintelligence Field Activity, Critical Infrastructure Protection Program Management Directorate; Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs (ASD), Critical Infrastructure Protection Office; Assistant Secretary of Defense for Networks and Information Integration, Information Management & Technology Directorate; Joint Staff, Directorate for Operations, Antiterrorism and Homeland Defense Threat Reduction Agency (DTRA), Combat Support Assessments Department of the Army, Asymmetric Warfare Office, Critical Infrastructure Risk Management Branch; Office of the Chief Information Officer; Mission Assurance Division, Naval Surface Warfare Center, Dahlgren Division, Dahlgren, Virginia; Headquarters, U.S. Marine Corps, Security Division, Critical Department of the Air Force, Air, Space and Information Operations, Plans, and Requirements, Homeland Defense Division; Headquarters, Defense Intelligence Agency, Office for Critical Infrastructure Protection & Homeland Security/Defense; Headquarters, Defense Information Systems Agency, Critical Headquarters, U.S. Strategic Command, Mission Assurance Division, Offutt Air Force Base, Nebraska To examine the status of DOD’s efforts to develop and implement a risk management approach, we reviewed Homeland Security Presidential Directive 7, the Homeland Security Act of 2002, and the National Infrastructure Protection Plan as they relate to the DIB sector-specific and sector assurance plans, as well as other studies conducted by GAO, the Congressional Research Service, and the DOD Inspector General concerning risk management and defense critical infrastructure. We discussed with DOD officials the requirements for a risk management plan for the DIB and the status of the approach’s implementation. We also reviewed and discussed information and data on the Defense Contract Management Agency’s (DCMA) efforts to identify, assess, and remediate critical DIB assets. Specifically, we evaluated the basis for the criteria DCMA established and used to identify important and critical DIB assets; the ways in which these criteria were used by each of the services to help identify important and critical DIB assets; and the ways in which foreign contractors were being identified. We evaluated information concerning the development of the asset prioritization model, the factors used to rank order the critical assets, the refinements that have been made and planned as the model matures, and the outcomes produced by applying the model to the fiscal year 2006 critical asset list. We reviewed the standardized mission assurance assessment process for critical DIB assets, the development of standards to be used, the training for teams to conduct assessments, the reports on six pilot vulnerability assessments performed in fiscal years 2006 and 2007, and lessons learned to be incorporated in future assessments. 
We reviewed the remediation planning guidance DOD is developing for the Defense Critical Infrastructure Program (DCIP) generally, and we compared the overall guidance to that being developed for the DIB. We also met with the National Guard Bureau and one of the state National Guard teams that conducts DIB sector vulnerability assessments. To examine the challenges faced by DOD in developing and implementing its approach, we assessed the extent to which key steps in the planned approach have been implemented. We compared DCIP policies for identifying mission-essential tasks and related defense critical assets with DCMA’s criteria for identifying a critical DIB asset; and we discussed reasons for the differences with OSD, ASD(HD&ASA), DCMA, and the services. We assessed the development and use of DCMA’s asset prioritization model, including discussions with DCMA and OSD about the requirements for models used within DOD to undergo external technical review and to incorporate all the needed data in order to ensure the model’s validity and suitability. We reviewed methods DCMA has used previously to obtain contractor-specific data, as well as methods planned for future efforts, to ensure that DCMA will obtain more complete information. We discussed with DCMA and DOD intelligence agency officials the threats to the DIB and the availability of specific threat information to DCMA. We compared the assessments being conducted with the rankings of the critical DIB contractors in the asset priority model, and we discussed with DCMA officials why they have not followed the rankings and the challenges that they have encountered as they have begun working with private-sector contractors. We reviewed DCMA’s efforts to encourage reluctant private-sector DIB contractors to participate in the program, including potential changes suggested for the Defense Federal Acquisition Regulation that were ultimately not enacted. We also reviewed DCMA’s current efforts to work with DHS to develop an accreditation approach for identifying and certifying Protected Critical Infrastructure Information, and steps taken by DCMA to overcome resistance. We spoke with a non-probability sample of DIB contractor officials generally about their willingness to participate in the program and the reasons for their respective views, and we discussed with DOD officials and these contractor officials the availability of data concerning foreign contractors. Their comments are not generalizable to a larger population. Lastly, we determined the extent to which DCMA has identified metrics with time frames for completing development of the risk-based management process. We conducted our work between August 2006 and June 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Harold Reich, Assistant Director; Aisha Cabrer; Colin Chambers; Lionel Cooper; Kate Lenane; Anna Maria Ortiz; Terry Richardson; Matthew Sakrekoff; and Cheryl Weissman also made key contributions to this report.
The U.S. military relies on the defense industrial base (DIB) to meet requirements to fulfill the National Military Strategy. The potential destruction, incapacitation, or exploitation of critical DIB assets by attack, crime, technological failure, natural disaster, or man-made catastrophe could jeopardize the success of U.S. military operations. GAO was asked to review the Department of Defense's (DOD) Defense Critical Infrastructure Program and has already reported that DOD has not developed a comprehensive management plan for its implementation. This, the second GAO report, has (1) determined the status of DOD's efforts to develop and implement a risk management approach to ensure the availability of DIB assets, and (2) identified challenges DOD faces in its approach to risk management. GAO analyzed plans, guidance, and other documents on identifying, prioritizing, and assessing critical domestic and foreign DIB assets and held discussions with DOD and contractor officials. DOD has begun developing and implementing a risk management approach to ensure the availability of DIB assets needed to support mission-essential tasks, though implementation is still at an early stage. Its sector assurance and sector-specific plans focus on steps to identify a list of critical assets that, if damaged, would result in unacceptable consequences; prioritize those critical assets based on a risk assessment process; perform vulnerability assessments on high-priority critical assets, and encourage contractors' actions to remediate or mitigate adverse effects found during these assessments, as appropriate, to ensure continuity of business. The Defense Contract Management Agency, the executing agency for the DIB, has developed a process to identify the most important DIB assets and to narrow this list to those it considers critical. It has also developed an asset prioritization model for determining a criticality score and ranking critical assets, and it has established a standardized mission assurance vulnerability assessment process for critical DIB assets. DOD faces several key challenges in implementing its DIB risk management approach. Overall, DOD's methodology for identifying critical DIB assets is evolving, and DOD lacks targets and time frames for completing development of key program elements that are needed for its risk management approach. Without them, DOD cannot measure its progress toward ensuring that DIB assets supporting critical DOD missions are properly identified and prioritized. The specific challenges are as follows: First, DOD is not fully incorporating the military services' mission-essential task information (i.e., listings of assets whose damage, degradation, or destruction would result in DOD-wide mission failure) in compiling its critical asset list. Second, GAO's analysis of DOD's prioritization model shows that weighting factors were selected and data determined according to subjective decisions and limited review, and that needed contractor-specific data were lacking, as was comprehensive threat information, thus undermining the utility of the index score for prioritizing contractors. Without these comprehensive data and a reliable asset prioritization model, DOD will not be in a sound position to know that it has identified the most important and critical assets, as called for in the National Military Strategy. 
Third, with regard to scheduling and conducting assessments of critical DIB assets, DOD is currently doing so based on contractor amenability and security clearance status without regard for assets' priority rankings, and thus cannot ensure that the most critical DIB contractors are assessed. Fourth, DOD lacks a plan for developing options to work with the Department of State and other appropriate agencies to identify and address potential challenges in assessing vulnerabilities in foreign critical DIB assets. Until all these challenges are addressed, DOD will lack the visibility it needs over critical DIB asset vulnerabilities, will be unable to encourage critical DIB contractors to take needed remediation actions, and will be unable to make informed decisions regarding limited resources.
Modernizing financial management systems so they can produce reliable, useful, and timely data needed to efficiently and effectively manage the day-to-day operations of the federal government has been a high priority for Congress for many years. In recognition of this need, and in an effort to improve overall financial management, Congress passed a series of financial and IT management reform legislation dating back to the early 1980s, including the CFO Act and the Federal Financial Management Improvement Act of 1996 (FFMIA). FFMIA, in particular, requires the 24 departments and agencies covered by the CFO Act to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger at the transaction level. FFMIA also requires auditors, as part of the 24 CFO Act agencies’ financial statement audits, to report whether those agencies’ financial management systems substantially comply with these requirements. In addition, the Clinger-Cohen Act of 1996 requires OMB to improve the acquisition, use, and disposal of IT by the federal government and continually assess the experiences of executive agencies in managing IT, among other responsibilities. Following enactment of this law, OMB revised Circular No. A-130, Management of Federal Information Resources, which established policy for the management of federal information resources and designated OMB as responsible for overall leadership and coordination, as well as the development and maintenance of a governmentwide strategic plan for federal information resources management within the executive branch. Despite these efforts, long-standing financial management systems weaknesses continue to present a formidable management challenge in providing accountability to the nation’s taxpayers and agency financial statement auditors continue to report that many agencies’ systems do not substantially comply with FFMIA requirements. In March 2004, OMB launched the FMLOB initiative, in part, to reduce the cost and improve the quality and performance of federal financial management systems by leveraging shared service solutions and implementing other reforms. The stated goals of the FMLOB initiative were to (1) provide timely and accurate data for decision making; (2) facilitate stronger internal controls that ensure integrity in accounting and other stewardship activities; (3) reduce costs by providing a competitive alternative for agencies to acquire or develop, implement, and operate financial management systems through shared service solutions; (4) standardize systems, business processes, and data elements; and (5) provide for seamless data exchange between and among federal agencies by implementing a common language and structure for financial information and system interfaces. According to a December 2005 OMB memorandum, to achieve the FMLOB vision—and enable efforts to achieve its goals—federal agencies must have competitive options available for financial systems. OMB described a shared service solution framework consisting of a limited number of providers that deliver competitive alternatives for agencies investing in financial system modernizations and provide financial management services for multiple organizations. 
OMB stated that the economies of scale and skill of a provider will allow it to provide federal agencies with a lower-risk, lower-cost alternative, with increased service quality, for financial system modernization efforts. According to OMB, when the FMLOB is successful, federal agencies will have the ability to migrate from one solution to a more competitive or better-performing alternative that is offered by a limited number of stable and high-performing providers. In May 2006, OMB established a migration policy and issued its Competition Framework for FMLOB Migrations to provide guidance to agencies planning to migrate their financial management systems and services. According to this migration policy, “with limited exception, an agency seeking to upgrade to the next major release of its current core financial management system or modernize to a different core financial management system must either migrate to a federal shared service provider (SSP) or qualified private sector provider, or be designated as an SSP. At a minimum, agencies must consider pursuing hosting and application management shared services. However, agencies may also consider other shared services, such as accounting or transaction processing.” This policy was subsequently incorporated into OMB Circular No. A-127 in January 2009; this circular provides guidance on the use and selection of external providers to ensure that agencies rely on such providers to help manage their systems and no longer develop their own unique systems. As program manager for the FMLOB initiative, FSIO had a significant role in achieving FMLOB goals, including the development of standard business processes, core financial system requirements, and testing and product certification. In March 2010, OMB announced that FSIO was ceasing operations effective March 31, 2010, stating that FSIO had achieved its objectives of developing governmentwide financial management business processes and data elements. As part of its new approach, OMB also announced in March 2010 the creation of the Office of Financial Innovation and Transformation (FIT) within the Department of the Treasury’s Office of Fiscal Service. FIT’s stated mission includes (1) helping set a new course for federal financial management using automated solutions to reduce duplicate work at individual agencies and (2) assisting in ensuring consistency with a long-term financial management systems strategy for the federal government. In June 2010, OMB announced key elements of its new approach, which will focus on (1) implementing smaller project segments that deliver critical functionality sooner, (2) increasing oversight and review of financial system projects, (3) promoting higher-impact shared service efforts related to transaction processing, (4) assessing compliance with financial system requirements, and (5) revising the process for certifying financial management software. In an effort to capitalize on new technologies to help address financial management weaknesses and help meet their financial management needs, about half of the CFO Act agencies are in the process of or have plans to modernize their core financial systems, which often involve large-scale, multiyear financial system implementation efforts. According to the results of our survey, 12 of 23 civilian CFO Act agencies have migrated, or plan to migrate, certain services supporting 16 current systems to 12 external providers in connection with their modernization efforts.
Because of the number of separate external service providers involved, the progress toward a shared service framework among the CFO Act agencies has been limited. Over the years, federal agencies have struggled to develop and implement numerous core financial systems to help meet their financial information needs for managing and overseeing their day-to-day operations and programs. As shown in table 1, the civilian agencies, representing 23 of the 24 CFO Act agencies, identified 45 fully deployed core financial systems in use as of September 30, 2009, in response to our survey of the 24 CFO Act agencies. While some of these agencies have recently completed efforts to deploy modernized systems, 17 agencies continue to use 25 aging legacy systems to help meet their needs, including 8 core financial systems placed into operation prior to 1990. Additional information on the 45 current civilian CFO Act agency core financial systems can be found in table 5 of appendix II. Recognizing the importance of effective core financial systems in meeting their financial information needs and efforts to address financial management weaknesses, many agencies are modernizing these current core financial systems. In this regard, 14 of the 23 civilian CFO Act agencies identified 14 systems they plan to fully deploy after fiscal year 2009, which will replace 27 of the current legacy systems. However, agencies provided this information prior to the issuance of OMB’s June 2010 guidance concerning oversight and review of financial system projects, and some of these 14 planned systems may no longer be viable projects under that guidance. Additional information on the 14 planned civilian CFO Act agency core financial systems can be found in table 6 of appendix II. In addition to the 23 civilian CFO Act agencies that responded to the survey, the Department of Defense (DOD) identified one current system, even though it responded that it has more than 100 core financial systems. DOD also identified 6 enterprise resource planning (ERP) systems it plans to deploy from 2011 through 2017. For example, DOD’s General Fund Enterprise Business System is an ERP system that is expected to eliminate 87 current systems and to be used by approximately 79,000 users once it is fully deployed in January 2012. Detailed information that DOD reported on its current and planned systems is included in tables 5 and 6 in appendix II. Because of the scope and complexity of agency modernization efforts, especially those involving highly integrated ERP systems, these large-scale projects often involve system implementations extending over several years before their intended benefits can be realized. For example, in 1999, the Army initiated its Logistics Modernization Program (LMP) in order to better manage its inventory and repair operations at various depots. Although the Army anticipates completing its 12-year multiphased deployment in fiscal year 2011, this project reflects the substantial challenges in large-scale deployments, such as a lack of a comprehensive set of metrics with which to measure the success of implementation. Similarly, the Department of Justice (DOJ) is involved in a multiyear modernization effort to replace six core financial systems and multiple procurement systems operating across the agency with a new integrated core financial system (referred to as the Unified Financial Management System, or UFMS). 
DOJ expects to complete its efforts to deploy UFMS in 2013, 10 years after the initial alternatives analysis related to this project was completed. Additional information concerning core financial system modernization efforts at DOJ and other selected case study agencies can be found in appendix III. Although OMB’s previous FMLOB guidance focused on migrating support services in connection with new or upgraded agency systems rather than previously deployed systems, 12 of the 23 civilian CFO Act agencies reported that they had already migrated, or plan to migrate, IT hosting or application management services supporting 16 of the 45 current systems that had already been fully deployed as of September 30, 2009. Further, these agencies plan to rely on eight different commercial providers and four federal SSPs to provide services for current systems. Of the 32 expected systems noted in table 1, there are 14 agencies relying on or expecting to rely on 11 providers—4 federal SSPs and 7 commercial providers—to support 17 core financial systems. Table 2 summarizes civilian agencies’ use of external providers—either federal SSPs or commercial providers—for hosting or application management of the 45 current, 14 planned, and 32 expected core financial systems. Overall, 14 of the 23 civilian CFO Act agencies are planning to complete their efforts to deploy 14 planned systems at various times through fiscal year 2018. Ten of these 14 agencies reported that they migrated, or plan to migrate, IT hosting and application management services supporting 10 of the 14 core financial systems they plan to fully deploy after September 30, 2009. In connection with these migrations, 5 of the 10 agencies plan to rely on five different commercial providers, while 2 of the 10 rely, or plan to rely, on the same federal SSP to provide these services, and 3 of the 10 have not determined who the provider will be. In addition, DOD is planning to use two commercial providers for 2 of its 6 planned systems. Table 6 in appendix II includes additional information concerning the migration of selected support services for the 14 planned civilian agency core financial systems and 6 planned DOD systems. In addition to IT hosting and application management support services, eight CFO Act agencies reported that they have migrated, or plan to migrate, transaction processing services to external providers. Specifically, DOD, the Department of Homeland Security, the Department of Labor, and the Nuclear Regulatory Commission (NRC) (as shown in table 6 of app. II) reported that they plan to rely on external providers to provide transaction processing support services for their planned systems while the Department of Transportation, the Department of the Treasury, the General Services Administration, and NRC (as shown in table 5 of app. II), reported that they already rely on external providers for these services for their existing systems. Rather than migrating these services, some large agencies are consolidating their transaction processing activities in-house at the agency level or integrating internal accounting operations through their own internal agency shared solution (e.g., the Department of Agriculture and DOJ, as described in app. III). 
In June 2010, OMB stated that its attempts to mandate the use of shared services under its previous policy—for hosting and application management—yielded inconsistent results as medium and large agencies encountered the same types of costs and risks with an external provider as they did when modernizing in-house. In contrast, smaller agencies are more frequently relying on external providers to provide core financial system support services to leverage the benefits of using external providers, as discussed in more detail later in this report. Specifically, according to officials at the four federal SSPs, 90 non-CFO Act agencies rely on the support services these providers offer. Federal SSP officials also stated that smaller agencies more frequently rely on the transaction processing support services they provide. For example, according to an official from one federal SSP, it provides transaction processing services to all of its 45 non-CFO Act client agencies. See appendix IV for information on the number of clients serviced by federal SSPs. Agencies and external providers reported that migrating support services to external providers offers advantages for helping smaller agencies, in particular, to capitalize on the benefits associated with sharing the services and solutions available through external providers. However, while federal agencies and external providers have made varied progress toward implementing the FMLOB initiative, they continue to face significant challenges affecting their efforts to modernize core financial systems and migrate selected services supporting them. OMB officials acknowledged that efforts to capitalize on shared services at large agencies have achieved limited success and, in a March 2010 memorandum, announced a need to develop a new approach for financial systems in the federal government. The benefits and challenges experienced through agency and provider efforts to implement the FMLOB initiative offer important lessons learned that if considered could assist OMB in developing its new approach. Modernization and migration efforts highlighted a number of lessons learned regarding potential benefits and challenges of agency migrations to external providers. The potential benefits and challenges summarized in this section were identified by the 24 CFO Act agencies, smaller, non-CFO Act agencies, and external providers through survey results, interviews, and agency case studies. We also identified challenges with agency migrations related to OMB’s guidance on competition. See appendix V for more details on key benefits and challenges reported related to agency migration and modernization efforts. As shown in table 3, external providers’ experienced staff, the potential for cost savings through shared services, increased economies of scale, and the ability to focus on mission-related responsibilities were cited in the survey responses of CFO Act agencies as some of the benefits and advantages of migrating core financial system support services to external providers. For example, Treasury cited potential cost savings and benefits associated with using an external provider such as resource sharing, provider expertise in solving application problems, and using cloud computing concepts. In May 2010, we also reported potential benefits associated with cloud computing, such as economies of scale and the faster deployment of patches to address security vulnerabilities. 
According to external provider officials, smaller agencies rely on external providers for transaction processing more frequently than CFO Act agencies do and benefit from providers' use of shared instances of software applications and standard interfaces across multiple clients, and their ability to more efficiently process complex transactions. To help realize these benefits, CFO Act agencies also identified a variety of key factors that contribute to successful migrations. Many of the factors cited involve the effective use of disciplined processes, such as clearly defining requirements and performing gap analyses to ensure that agency needs will be met, performing appropriate testing and data conversion procedures, minimizing customizations of software, and reengineering business processes to facilitate greater standardization. In addition, agencies cited the need for (1) appropriate and adequate resources to lead, plan, manage, execute, and oversee modernization and migration activities; (2) clearly defined expected outcomes and responsibilities of key stakeholders; and (3) effective service-level agreements and other mechanisms that could help ensure that the intended benefits of migrating are achieved. CFO Act agencies also cited various concerns about migrating to external providers, such as the ability of external providers to provide solutions that meet the complex and unique needs associated with large agency migrations. As shown in table 4, CFO Act agencies expressed concerns about the general loss of control, flexibility, and subject matter expertise; the various risks they would experience if IT hosting, application management, and transaction processing were migrated; and whether providers had the capacity to meet the extensive needs associated with large CFO Act agencies. External providers acknowledged these concerns, but cited additional challenges affecting their migration-related efforts, such as agencies' resistance to adopting common processes used by providers and the lack of a clear mechanism for ensuring that agency migrations occur as intended. We found similar migration challenges related to OMB's guidance on competition, its implementation, and effective oversight that affected agency and external provider migration efforts. For example, we found that agencies were not always required to migrate to an external provider and did not always conduct a competition for IT hosting and application management because they had already decided to use existing in-house resources to meet their needs (e.g., DOJ, which is discussed in more detail in app. III). On the other hand, we found that those agencies migrating to external providers were not using a limited number of external providers, raising significant questions regarding the extent to which the services they are to provide will be shared with other agencies and any related potential cost savings will be realized. Specifically, as previously discussed, based on survey responses, 14 CFO Act agencies were relying, or planning to rely, on a total of 11 different external providers to support 17 expected systems, and providers for 4 of the 17 systems were still to be determined. Unlike similar efforts to implement other OMB electronic government (E-gov) initiatives, the FMLOB guidance does not provide a mechanism for determining the appropriate number of providers needed or describe a governance structure to help ensure that agencies migrate to one of the specific providers identified.
For example, prior policies for the human resource line of business (HRLOB) and E-Payroll initiatives both involved the migration of agency-performed functions common across federal agencies to specifically designated shared service centers. Further, in connection with the E-Payroll initiative, established in June 2002, four providers were selected to furnish payroll services for the executive branch. In its latest annual report to Congress on E-gov benefits, OMB reported that migrations of payroll functions performed by other agencies to these providers had been completed. OMB officials acknowledged that efforts to modernize financial management systems under its FMLOB initiative have achieved limited success and that a new approach is needed. Detailed information on OMB's new approach is not yet available because of its early stage of implementation. However, we have summarized the key elements of its new approach and identified related issues, generally based on lessons learned from prior migration and modernization efforts, for OMB to consider as it moves forward with its implementation. To address ongoing challenges with financial management practices, OMB announced a new financial systems modernization approach, which encompasses the following five key areas. Shared services for transaction processing. In March 2010, OMB and Treasury announced the creation of FIT, within Treasury, effective on April 5, 2010. FIT is expected to coordinate with the CFO Council to identify and facilitate the acquisition or development of initial operating capabilities for automated solutions for transaction processing. Initially, FIT's efforts will focus on developing operating capabilities for vendor invoicing and intergovernmental transactions. According to OMB, based on the success of interested agencies' efforts to pilot initial capabilities of new solutions, the solutions will be phased in across the federal government as other agencies request to adopt them. OMB stated that its previous policy captured under the FMLOB initiative—requiring agencies to either serve as SSPs or leverage their services—will no longer be mandated in all cases, but that OMB supports such arrangements when they are cost effective. Segmented approach for deploying systems. OMB's new approach for agencies seeking to deploy a financial system includes limiting the overall length of development projects to 24 months and splitting them into segments of 120 days or less, in part to help simplify planning, development, project management, and other tasks and prioritize the most critical financial functions. Oversight and review of financial system projects. According to the June 2010 memorandum, agencies should identify up front a series of milestones, warning flags, and stop points over the course of the segment life cycle that, if deemed necessary, could result in the project being suspended and returned to planning. In addition, mechanisms for review of project status by senior management should be incorporated into project plans. In this regard, the memorandum directed CFO Act agencies to immediately halt activities, as of the date of the memorandum, on financial system modernization projects over a specified dollar threshold pending OMB review and approval of revised agency project plans reflecting this guidance. The guidance also stated that OMB will review project status on a quarterly basis through fiscal year 2012 and that project segment milestones must be met in order to release funding for additional segments.
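To illustrate the segment-length limit and the milestone-based funding gates just described, the following is a minimal sketch in Python. The project name, segment names, dates, and the simple month-to-day conversion are hypothetical assumptions for illustration and are not drawn from OMB's guidance.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

MAX_SEGMENT_DAYS = 120      # per-segment limit described in the June 2010 memorandum
MAX_PROJECT_DAYS = 24 * 30  # 24-month overall limit, using a rough month-to-day conversion

@dataclass
class Segment:
    name: str
    start: date
    length_days: int
    milestones_met: bool = False  # set by a senior-management review of the segment

    @property
    def end(self) -> date:
        return self.start + timedelta(days=self.length_days)

@dataclass
class Project:
    name: str
    segments: list = field(default_factory=list)

    def within_limits(self) -> bool:
        """Check the 120-day segment limit and the 24-month overall project limit."""
        if not self.segments:
            return True
        if any(s.length_days > MAX_SEGMENT_DAYS for s in self.segments):
            return False
        span = (max(s.end for s in self.segments) - min(s.start for s in self.segments)).days
        return span <= MAX_PROJECT_DAYS

    def can_start(self, next_segment: Segment) -> bool:
        """A new segment starts only if all prior milestones were met (releasing funding)
        and the project would still fall within the overall limits."""
        trial = Project(self.name, self.segments + [next_segment])
        return all(s.milestones_met for s in self.segments) and trial.within_limits()

# Hypothetical example: one completed segment and one proposed follow-on segment
project = Project("Core financial system upgrade")
seg1 = Segment("Vendor invoicing pilot", date(2011, 1, 1), 110)
project.segments.append(seg1)
seg1.milestones_met = True  # outcome of the quarterly review

seg2 = Segment("Intergovernmental transactions", date(2011, 5, 1), 120)
print(project.within_limits())  # True
print(project.can_start(seg2))  # True: milestones met and limits still satisfied
```

In this sketch, the warning flags and stop points OMB describes would correspond to the reviews that set the milestones_met flag before additional funding is released.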
In addition, OMB announced the establishment of the Financial Systems Advisory Board under the CFO Council, which will make recommendations to OMB on selected projects being reviewed in accordance with the memorandum. Compliance with financial system requirements. OMB stated in its June 2010 memorandum that current core financial system requirements remain in effect and that federal agencies have an ongoing responsibility to comply with them. OMB is also initiating a performance-based approach for compliance with financial system requirements that it believes will reduce the cost, risk, and complexity of financial system modernizations. OMB plans to issue a revision to OMB Circular No. A-127, Financial Management Systems, which will update existing requirements and include new guidance on how agencies and auditors will assess compliance with these requirements. Process for certifying financial management software. In March 2010, OMB announced the discontinuation of FSIO's core financial system software testing and certification function and announced that FSIO operations would cease effective March 31, 2010. OMB's June 2010 memorandum states that OMB is reforming the software testing and certification program by shifting accountability for software performance to vendors through self-certification. Under this approach, agencies will hold vendors accountable in the same manner in which other contractual obligations are enforced and will be able to hold contractors specifically accountable for false certifications. OMB also plans to provide additional details related to testing process changes in its revision to OMB Circular No. A-127 and revisit this policy on an annual basis. OMB's decision to embark on this new approach raises several key issues that have far-reaching implications for the government, software vendors, and external providers. While we recognize that the new approach is in an early stage of implementation, the steps taken so far do not fully describe a strategy that will address these issues moving forward, nor do they yet fully take into account lessons learned associated with previous governmentwide modernization efforts, including, in particular, the FMLOB migration activities discussed earlier in this report. Without sufficient detail on how these issues are to be addressed, uncertainties exist concerning the potential effectiveness of OMB's new approach. The following describes key issues related to each of the five areas of OMB's new approach. How will the new approach be implemented and what governance structure will be established to fully realize the benefits of common solutions and new technologies? How will new governmentwide shared solutions that are intended to perform functions currently performed at agencies work with current core financial systems and solutions? What guidance will be provided to agencies to encourage their participation in, and adoption of, the new solutions envisioned in the new approach? Previous efforts to realize the benefits associated with shared services have been challenging, in part because of the lack of a governance structure that ensures agency adoption of shared service solutions. Agency participation in the new solutions being developed by FIT is voluntary, and OMB's previous policy regarding migrations to external providers is no longer mandated. Therefore, the potential benefits that will actually be realized through shared services are uncertain.
According to the Institute of Electrical and Electronics Engineers, a concept of operations is normally one of the first documents produced during a disciplined development effort. OMB officials stated that they are developing an overall concept of operations but did not provide us with an estimated time frame for its completion. We previously reported on the need for this critical tool to provide an overall road map for describing the interrelationships among financial management systems and how information is to flow from and through them and within and across agencies, and for ensuring the validity of each agency's implementation approach. In addition, a concept of operations can be used to communicate overall quantitative and qualitative system characteristics to users, developers, and other organizational elements and would allow stakeholders to understand the user organizations, missions, and organizational objectives from an integrated systems point of view. We recognize that OMB's new approach is in an early implementation stage and guidance is still being developed. However, implementing this approach without certain policy guidance carries risk. For instance, without a concept of operations that provides an overall road map to guide implementation efforts, it is unclear how the new governmentwide solutions envisioned under the new approach will integrate with current or planned core financial systems, as well as how they will affect numerous smaller agencies that have already migrated to federal SSPs. In addition, the governance structure for implementing OMB's new approach will involve efforts expected to be performed by FIT. OMB has described certain activities FIT is expected to perform, but additional information concerning its purpose, its authority, and the resources to be devoted to its efforts remains unclear. For example, although OMB stated that FIT will assist in ensuring consistency with a long-term financial management systems strategy for the federal government, the specific role that FIT will play in developing or implementing a strategy or overseeing efforts to achieve its goals has not yet been defined. What actions will be taken to help ensure that agencies' efforts to reduce the scope of modernization projects so that they can be completed within 24 months do not inappropriately emphasize schedule-driven priorities at the expense of achieving event-driven objectives? What guidance will be provided to ensure that agencies have developed an overall, high-level system architecture that clearly defines specific development projects that provide interim functionality? Although efforts to reduce the scope of agency modernization projects so that they can be completed within 24 months may result in more manageable projects, we have previously reported on the importance of capturing metrics that identify events and trends to assess whether systems will provide needed functionality rather than schedule-driven approaches that may lead to rework instead of making real progress on a project. The process for ensuring that future modernization projects conducted under the new approach will align with governmentwide and agency goals, achieve measurable results, and minimize future workarounds and rework has not yet been clearly described. The Clinger-Cohen Act highlights the need for sound, integrated agency IT architectures and lays out specific aspects of a process agency chief information officers are to implement in order to maximize the value of agencies' IT investments.
For example, consistent with OMB's new approach, the act also advocates the use of a modular acquisition strategy for a major IT system. Under this type of strategy, an agency's need for a system is satisfied in successive acquisitions of interoperable increments. However, the act also states that each increment should comply with common or commercially accepted standards applicable to IT so that the increments are compatible with other increments of IT that make up the system. Some agency financial system modernization projects involve the implementation of large, integrated ERP systems—which may be designed to perform a variety of business-related tasks, such as accounts payable, general ledger accounting, and supply chain management across multiple organizational units—to help achieve agency strategic goals. Given the tightly integrated nature of these systems, the extent to which implementation projects can be modified and segmented to achieve OMB's objective for delivering interim functionality to help agencies address critical needs has not yet been determined. What specific criteria will be used to evaluate agency modernization project plans and task orders requiring OMB review and approval? What steps will be taken to ensure that appropriate procedures and resources are in place at the agency level to avoid an improper impoundment of funds? How will the roles and responsibilities of OMB, the Financial Systems Advisory Board, and others involved in conducting the reviews be defined, and how will their efforts be measured? Our prior work has linked financial management system failures, in part, to agencies not effectively incorporating disciplined processes shown to reduce software development and acquisition risks into their implementation projects. We support the principle of increased oversight and review of projects as called for in our prior recommendations. However, the criteria for performing quarterly assessments of agency modernization projects do not clearly define how such assessments will evaluate the extent to which agencies are embracing disciplined processes. Further, OMB's template for capturing information on agency projects identifies numerous aspects to be reviewed; however, agencies are not required to provide information needed to assess the effectiveness of testing and data conversion efforts necessary to ensure that substantial defects are detected prior to implementation and that existing data are converted so that they can be used in the new system environment. These and other disciplined processes are critical for successfully implementing a new system. Effective oversight to ensure that they are incorporated into agency and governmentwide system implementation projects will also continue to be a critical factor in the success of future modernization efforts envisioned under OMB's new approach. In addition, OMB's direction and CFO Act agencies' implementation of the direction to immediately halt activities on financial system IT projects pending the outcome of OMB's review present additional risks concerning adherence to procedures to be followed for impoundments of budget authority, as prescribed in the Impoundment Control Act of 1974.
Not all delays in obligating funds are impoundments, but where OMB has given direction to agencies to halt the issuance of new task orders or new procurements, we are concerned that agencies may misinterpret that as a direction to withhold budget authority from obligation either during the review process or upon the decision to terminate an investment. OMB issues general guidance in OMB Circular No. A-11 on the applicable procedures for compliance with the Impoundment Control Act. However, in 2006, we reported to Congress and OMB that executive agencies had improperly impounded budget authority following the President's submission of proposals to Congress to rescind certain budget authority because, in part, agencies were not fully aware of the nature of the proposals and their intended effect on currently available budget authority. OMB officials stated that none of the 24 CFO Act agencies identified an impoundment resulting from OMB's direction, but OMB had not evaluated the potential impact of the direction on the agencies' budget authority, nor had it issued any clarifying guidance to the agencies to alert them to the potential for impoundments that might arise if agencies withheld budget authority by not awarding contracts as directed. Moreover, the effectiveness of OMB's reliance on the Financial Systems Advisory Board to assist in the review of agency modernization projects will depend, in part, on the availability of sufficient resources to perform effective reviews and on clear criteria for selecting projects and performing the reviews. Clear, measurable criteria for determining which projects are to be assessed, and criteria that provide for objective assessments, would help ensure that reviews are performed completely and consistently for all projects and that oversight efforts help achieve intended results. The extent to which CFOs and chief information officers from major agencies or other experts will be available and used to perform such reviews, including whether such officials may be involved in reviewing projects related to their own agencies, has not been specified. While OMB officials told us that they plan to take steps to exclude officials from reviewing systems at their own agencies, the process for doing so has not been disclosed. How will system requirements and standard business processes be updated and maintained? What criteria will be used to determine whether a performance-based approach for compliance with financial system requirements reduces the cost, risk, and complexity of financial system modernizations? What actions will be taken to help ensure that discontinuing FSIO's software testing and certification program does not result in a lack of interoperability across agency systems? What steps will be taken to ensure that vendor self-certifications comply with applicable provisions of the Federal Acquisition Regulation? What guidance will be provided to agencies to clarify any changes in agency responsibilities for testing and validating software functionality? FSIO played a significant role in helping to identify and document federal financial management system requirements and the standard business processes on which they should be based. Such efforts were aimed at preventing duplicative research and compilation across government.
Prior to ceasing operations effective March 31, 2010, FSIO was working to finalize an exposure draft and issue an updated version of core financial system requirements intended, in part, to reflect changes necessary to align them with current standard business processes. OMB's June 2010 memorandum states that OMB plans to issue a revision to OMB Circular No. A-127 to update existing requirements and to provide guidance for agencies and auditors on how to assess compliance. The extent to which these changes will affect modernization efforts as well as improve the ability of financial systems to help address long-standing weaknesses remains undetermined. While OMB's plan to require vendors to self-certify software functionality is intended to shift accountability for software performance to vendors, it does not change vendor accountability for delivering products that meet specified standards. It also does not eliminate the need to develop and update those standards as new requirements are established to facilitate future improvements. Our work on financial management systems modernizations and industry standards has identified the importance of clearly defining system requirements and managing those requirements throughout system implementations; failure to do so can have a significant negative impact on the success of implementation efforts. The Government Performance and Results Act of 1993 (Results Act) highlights the importance of strategic plans and performance plans as a means for assisting agencies to achieve desired results. We previously reported that strategies should be specific enough to enable an assessment of whether they would help achieve the goals of the strategic plan. We also reported on how collaborative efforts involving multiple agencies to address crosscutting issues—such as federal financial management modernization efforts—could benefit from a governmentwide strategic plan that identifies long-term goals and the strategies needed to address them, aligned performance goals, and improved performance information that assists decision making to improve results. In recognition of OMB's critical role in governmentwide efforts such as those envisioned under this new approach, the Clinger-Cohen Act and OMB's implementing guidance, OMB Circular No. A-130, specifically require OMB to develop a strategic plan for managing information resources. Further, incorporating performance plans, goals, and other key elements that facilitate performance measurement and monitoring is essential for ensuring that efforts are appropriately aligned to achieve desired results. It will be essential that performance plans are expressed in an objective, quantifiable, and measurable form that clearly links strategic goals with the strategies to be used to achieve them. OMB's FMLOB initiative represented an important effort intended to reduce costs and improve the quality and performance of federal financial management systems that agencies depend on to generate reliable, useful, and timely information needed for decision-making purposes.
In connection with their efforts to implement this initiative and modernize their systems, many agencies took steps to migrate selected core financial system support services to external providers. The use of external providers by smaller agencies in particular highlights potential benefits to be realized through these efforts, such as adopting common processes and sharing software. Other agencies continue to rely on aging legacy systems—even though they may have migrated to an external provider. Agencies continue to be challenged with reengineering business processes and effectively incorporating disciplined processes into their implementation efforts to help ensure their success. OMB announced a new strategy and plans for future financial management system modernization efforts, and began issuing a series of guidance on its new approach from March 2010 to June 2010. However, it is too early to determine the extent to which this new approach will address the cost, risk, and complexity of financial system modernizations. The experience and challenges related to efforts to implement the FMLOB initiative provide important lessons learned as OMB continues to develop and implement its new approach. OMB has stated that it plans to develop additional guidance, such as a governmentwide concept of operations, a long-term financial management systems strategy, and a revised OMB Circular No. A-127. Critical next steps will include OMB elaborating on its new approach to address key issues. The following includes our observations on these issues. As we have previously reported in connection with the FMLOB initiative, a concept of operations is one of the first and foremost critical building blocks and is needed to provide an overall road map to guide implementation of OMB’s new approach in accordance with best practices. Until a well-defined concept of operations is developed, questions remain on how the proposed governmentwide solutions can be integrated with current and planned agency financial management systems. Articulating key aspects of a strategic plan, such as goals and performance plans clearly linked to strategies for achieving them and expressed in an objective, quantifiable, and measurable form, is also critical for the success of OMB’s new approach. In addition, a governance structure that provides clear roles and responsibilities of key stakeholders, such as the Financial Systems Advisory Board and FIT, is necessary to help ensure that the stated goals are achieved. Further, detailed guidance and criteria will be important for understanding how ongoing and future modernization projects will be evaluated. In developing its strategy, it is also important for OMB to clarify the need to mitigate the risks involved with the new requirements for agencies to revise project plans to shorter increments. These risks include agencies adopting a schedule-driven approach rather than focusing on achieving event-driven results consistent with agency needs. In addition, providing guidance to agencies on incorporating relevant OMB Circular No. A-11 procedures would help to ensure that OMB efforts to review financial system IT projects under its new approach do not result in improper impoundments. As part of OMB’s revisions to Circular No. 
A-127, several clarifications would help provide agencies with direction to implement OMB's new approach, including (1) the requirements for using an SSP, (2) the new process for developing and updating federal financial management system requirements and standard business processes, and (3) the performance-based approach for determining FFMIA compliance. We recognize that OMB is still in the process of fully implementing this new approach and completing related guidance. However, addressing these and other identified key issues and overcoming the historical tendency for agencies to view their needs as unique and resist standardization will depend on prompt and decisive action to develop an effective governmentwide modernization strategy and related guidance. We are not making any new recommendations in this report because of the early implementation stage of OMB's new approach; however, we will continue to work with OMB to help ensure that it provides agency management and other stakeholders with the guidance needed to bring about meaningful improvements in financial management systems. Finally, to ensure that taxpayers' dollars are used effectively and efficiently, continued congressional oversight will be crucial for transforming federal financial management systems to better meet federal government needs. We requested comments on a draft of this report from the Acting Director of OMB or his designee. On August 31, 2010, the OMB Controller provided oral comments on the draft report, including technical comments, which we incorporated as appropriate. Overall, the Controller was concerned that it was too early for GAO to draw conclusions on the change in policy that was published in OMB Memorandum M-10-26 issued on June 28, 2010, and that the report needed to better reflect the new approach as being a work in progress in the beginning stages of implementation. To help address OMB's concern, we included additional references to the early implementation stage of OMB's new approach. The Controller also stated that the questions raised in the report were good for framing the issues, and that some of them were in the process of being addressed. For example, he stated that the planned revisions to OMB Circular No. A-127 will address issues raised on systems requirements and the process for certifying software. We have updated the report accordingly. The Controller also stated that the members of the new Financial Systems Advisory Board adopted a charter dated August 1, 2010, which provides additional detail and specificity on the role and responsibilities of the Board members. We were provided the charter on September 2, 2010, and will evaluate it as part of our future work. We continue to believe that the questions and issues raised in the report need to be addressed by OMB in order to reduce risks and help ensure successful outcomes as it moves forward with its new approach and develops additional guidance. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Ranking Member, Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Member, Subcommittee on Government Management, Organization, and Procurement, House Committee on Oversight and Government Reform; and the Acting Director of OMB.
The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Kay Daly, Director, Financial Management and Assurance, who may be reached at (202) 512-9095 or dalykl@gao.gov, or Naba Barkakati, Chief Technologist, Applied Research and Methods, who may be reached at (202) 512-2700 or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To address our objectives, we surveyed chief financial officers (CFO), or their designees, at the 24 CFO Act agencies. We asked each agency to identify the core financial systems that were fully deployed in the agency as of September 30, 2009, and any that the agency planned to fully deploy after that date. Through the use of e-mailed, self-administered questionnaires, we collected descriptive information on modernization and migration-related activities about each core financial system, as well as overall agency activities and perspectives regarding financial management line of business (FMLOB) migration efforts. We designed and tested these questionnaires in consultation with subject matter experts at GAO and the Financial Systems Integration Office (FSIO), GAO survey research methodologists, and selected agency officials. Data collection took place from November 2009 to April 2010. All 24 agencies responded to the survey request and returned questionnaires on 46 currently deployed systems and 20 planned systems that they had identified, as shown in appendix II, tables 5 and 6, respectively. While all agencies returned questionnaires, and therefore our data are not subject to sampling or overall questionnaire nonresponse error, the practical difficulties of conducting any survey may introduce other errors into our findings. In addition to questionnaire design activities discussed above, to minimize errors of measurement, question-specific nonresponses, and data processing errors, GAO analysts (1) pretested draft questionnaires with two agency officials prior to conducting the survey, (2) contacted respondents to follow up on answers that were missing or required clarification, and (3) answered respondent questions to resolve difficulties they had answering our questions during the survey. In addition, we tested the accuracy of selected responses provided by three agencies by comparing them to data we obtained from case studies. To obtain more detailed information on steps taken to modernize core financial systems and migrate related support services to external providers, we performed case studies at the Department of Justice (DOJ), Department of Agriculture (USDA), Federal Communications Commission (FCC), and Office of Personnel Management (OPM). These agencies were selected to provide a variety of perspectives from agencies actively involved in core financial system modernization efforts. Specifically, the criteria used to select agencies for the case studies included (1) different software solutions, (2) a mix of large and small agencies, and (3) differing experiences concerning the use of external providers to support their core financial systems. 
To identify the use of different software solutions and differing experiences concerning the use of external providers, we reviewed an inventory of CFO Act agency and non-CFO Act agency core financial systems published by FSIO as of December 2008 that identified agencies' software, versions, and, where applicable, the providers that host the systems, as well as selected 2008 agency performance and accountability reports. To provide a mix of large and small agencies, we selected at least one agency from each of three strata defined by gross costs as reported in the 2008 Financial Report of the United States Government. To help ensure an efficient use of audit resources, we did not select as case study agencies for this review any agencies for which GAO had done work involving their financial management systems. We obtained and summarized information regarding these case study agencies from documentation provided by the agencies, such as capital asset plans and alternatives analyses. We also interviewed key agency officials involved with the implementations, including CFOs and project managers. We did not evaluate the effectiveness of the acquisition and implementation processes used by the case study agencies. In addition, we did not verify the accuracy of the data provided. To identify the benefits of, and key challenges that agency officials report as having an impact on, their efforts to modernize and migrate core financial systems to external providers, we reviewed and analyzed survey results from the 24 CFO Act agencies. In addition, we reviewed policies, guidance, reports, and memorandums obtained from the Office of Management and Budget (OMB), FSIO, the four selected case study agencies, the four OMB-designated federal shared service providers (SSPs), two commercial vendors supporting migration activities at selected case study agencies, and prior GAO reports. The four OMB-designated federal SSPs were the Department of Transportation's Enterprise Services Center, the Department of the Interior's National Business Center, the Department of the Treasury's Bureau of Public Debt's Administrative Resource Center, and the General Services Administration's Federal Integrated Solutions Center. We interviewed knowledgeable officials of these organizations, as well as a co-chair of the Small Agency Council Finance Committee and chairman of its Financial Systems Subcommittee (the CFO of the Equal Employment Opportunity Commission and Deputy CFO of the Federal Energy Regulatory Commission, respectively) and the team leader of the CFO Council's FSIO Oversight Transformation Team concerning key factors that contribute to successful migrations and significant challenges that may affect migration efforts at agencies and external providers. We also interviewed key OMB officials, including the Controller and Deputy Controller of the Office of Federal Financial Management, to discuss these factors as well as governmentwide efforts toward migrating core financial systems to external providers and OMB's newly announced policy and financial systems modernization approach (new approach). We obtained and reviewed recent policies and guidance issued by OMB, such as OMB Memorandum M-10-26 calling for an immediate review of financial systems projects. We analyzed OMB's new approach in relation to relevant laws, regulations, and guidance, including the Clinger-Cohen Act, the CFO Act, the Federal Financial Management Improvement Act of 1996 (FFMIA), OMB Circular No. A-127, OMB Circular No.
A-130, and standards set by the Institute of Electrical and Electronics Engineers. We conducted this performance audit from June 2009 through September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We requested comments on a draft of this report from the Acting Director of OMB or his designee. We received oral and technical comments from the OMB Controller, which are discussed in the Agency Comments and Our Evaluation section and incorporated as appropriate. Tables 5 and 6 summarize responses received from CFO Act agencies concerning their core financial systems and efforts to migrate selected core financial system support services to external providers. The agencies completed separate questionnaires on each identified core financial system and the status of activities related to migrating information technology (IT) hosting, application management, and transaction processing services supporting these systems to external providers as of September 30, 2009. The status of agency migration activities and use of external providers are categorized as follows: Migrated - (provider). Agency has already migrated this service to a federal SSP or commercial provider as indicated. Planned - (provider). Agency has decided and planned to migrate this service to a selected federal SSP or commercial provider as indicated. Planned - (provider undetermined). Agency has decided to migrate this service but has not yet selected a provider. Undecided. Agency has not decided to migrate this service. Not planned. Agency does not plan to migrate this service to an external provider. Table 5 summarizes the results of the 24 CFO Act agency responses related to 46 current core financial systems, including 45 civilian systems and 1 defense system, that agency officials identified as being fully deployed as of September 30, 2009. Of these, 12 agencies reported that they have already migrated, or plan to migrate, IT hosting or application management core financial system support services to external providers for 16 systems. Further, 4 agencies reported that they rely on external providers for transaction processing services supporting 4 systems. In addition to completing separate questionnaires concerning current core financial systems that were fully deployed as of September 30, 2009, agencies completed separate questionnaires for 20 core financial systems, including 14 civilian and 6 defense systems, that they planned to fully deploy after that date, as shown in table 6. The surveys were conducted prior to the issuance of OMB's new guidance. Accordingly, the impact, if any, of the new policy on agencies' plans to deploy new core financial systems is not reflected in table 6. Some of these systems have already been partially deployed at bureaus or other subagency components within the agencies, and therefore some services may have already been migrated to an external provider even though full deployment had not yet occurred as of September 30, 2009.
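To show how the migration status categories defined above could be captured for analysis of the survey responses, the following is a minimal sketch in Python. The agency names, system names, and records are hypothetical and do not reflect actual survey data.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class MigrationStatus(Enum):
    MIGRATED = "Migrated"                                   # already migrated to the indicated provider
    PLANNED = "Planned"                                     # migration planned to a selected provider
    PLANNED_PROVIDER_UNDETERMINED = "Planned - provider undetermined"
    UNDECIDED = "Undecided"
    NOT_PLANNED = "Not planned"

@dataclass
class ServiceStatus:
    service: str                                            # e.g., "IT hosting", "Application management"
    status: MigrationStatus
    provider: Optional[str] = None                          # federal SSP or commercial provider, if known

@dataclass
class CoreFinancialSystem:
    agency: str
    system: str
    services: list

# Hypothetical survey records illustrating the categories
records = [
    CoreFinancialSystem("Agency A", "System X", [
        ServiceStatus("IT hosting", MigrationStatus.MIGRATED, "Federal SSP 1"),
        ServiceStatus("Transaction processing", MigrationStatus.NOT_PLANNED),
    ]),
    CoreFinancialSystem("Agency B", "System Y", [
        ServiceStatus("IT hosting", MigrationStatus.PLANNED_PROVIDER_UNDETERMINED),
    ]),
]

# Count systems with at least one service already migrated or planned for migration
migrating = sum(
    any(s.status in (MigrationStatus.MIGRATED, MigrationStatus.PLANNED,
                     MigrationStatus.PLANNED_PROVIDER_UNDETERMINED)
        for s in r.services)
    for r in records
)
print(migrating)  # 2
```

Structuring responses this way would allow counts of the kind reported in tables 5 and 6, such as the number of systems with at least one migrated or planned service, to be tallied consistently.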
Of the 20 planned systems shown in table 6, 10 agencies reported that they have already migrated, or plan to migrate, IT hosting and application management services supporting 10 systems; 4 agencies reported that they either do not plan, or had not yet made a decision, to migrate both these services supporting 4 systems; and 1 agency, the Department of Defense, reported that it did not plan to migrate these services for 4 planned systems and had migrated both services for 1 system and application management services for 1 system. In addition, 4 agencies reported that they plan to rely on external providers to provide transaction processing services supporting 4 planned systems. Additional information concerning selected federal agencies' migration and modernization efforts is presented in this appendix. The four case study agencies are USDA, FCC, DOJ, and OPM. All four of these agencies reported similar reasons for undertaking efforts to modernize their core financial systems, including reliance on outdated software that adversely affected their ability to meet financial management challenges, and had a goal of implementing a solution that will provide agencywide, streamlined, real-time accounting and reporting capability. We did not evaluate the effectiveness of the acquisition and implementation processes used by the case study agencies or verify the data provided. Planned software solution: SAP; IT hosting: USDA National Finance Center (NFC). USDA is taking steps to modernize its core financial systems using a solution based on SAP commercial off-the-shelf (COTS) software that is intended to provide agencywide online, real-time transaction capability and access. USDA's Financial Management Modernization Initiative (FMMI) is intended to replace the Foundation Financial Information System (FFIS) and consolidate and eliminate multiple systems currently used in various USDA component agencies and staff offices. USDA launched FMMI after identifying the need to upgrade department and agency financial and administrative payment and program general ledger systems. In 2005, USDA began efforts to identify its new core financial system needs and took steps to determine what software and services could be provided by federal SSPs, private software vendors, and other commercial providers. Figure 1 and table 7 summarize the key migration and modernization activities used by USDA to identify and deploy a core financial system solution. FCC identified a need to modernize its core financial systems and selected a Web-based version of Momentum COTS software to provide agencywide online, real-time transaction capability and access. FCC's planned new core financial system is also intended to interface electronically with common governmentwide software applications and to replace a number of peripheral supporting software applications currently in use at FCC. FCC's Core Financial System Replacement Project is intended to replace the Federal Financial System (FFS), which is an older, nonintegrated system that relies on batch processing of transactions and is currently hosted by the Department of the Interior's National Business Center (NBC). The new core financial system is planned to be used as the system of record for all external reporting requirements, including financial statement preparation access and processing. In 2005, FCC began efforts to identify its core financial system needs and took steps to determine what software and services could be provided by federal SSPs, private software vendors, and other external providers.
Figure 2 and table 8 summarize the key modernization and migration activities taken by FCC to identify and plan a core financial system solution. DOJ is configuring its Unified Financial Management System (UFMS) to improve financial management and procurement operations across DOJ. UFMS is planned to replace six core financial management systems and multiple procurement systems currently operating across DOJ with an integrated COTS solution. According to officials, UFMS should allow DOJ to streamline and standardize business processes and procedures across all of its components; provide secure, accurate, timely, and useful financial data to financial and program managers across the department; and produce component- and department-level financial statements. In addition, the system is intended to assist DOJ by improving financial management performance and to aid departmental components in addressing the material weaknesses and nonconformances in internal controls, accounting standards, and systems security identified by DOJ's Office of Inspector General. Finally, the system is intended to provide procurement functionality to streamline business processes, provide consolidated management information, and provide the capability to meet all mandatory requirements of the Federal Acquisition Regulation and the Justice Acquisition Regulations. In 2003, DOJ began efforts to identify its new core financial system needs and took steps to determine what software and services could be provided by private software vendors and other external providers. Figure 3 and table 9 summarize the key migration and modernization activities taken by DOJ to identify and deploy a core financial system solution. OPM is taking steps to modernize its core financial systems using a solution based on Oracle COTS software that is intended to provide agencywide online, real-time transaction capability and access. The Consolidated Business Information System (CBIS) is intended to consolidate and eliminate multiple systems currently used by OPM, with the initial deployment on October 1, 2009, replacing the Government Financial Information System (GFIS). GFIS included CGI Momentum, which is used for salaries and expenses, and a revolving fund. OPM deployed phase I, release 1 of CBIS to replace Momentum, and, according to officials, plans to launch phase II, which would incorporate trust fund accounting, are currently under review by OPM leadership. Under CBIS, OPM also replaced its contract administration software, Procurement Desktop, with the Compusearch PRISM solution during phase I, release 1. In 2005, OPM began efforts to identify its core financial system needs and took steps to determine what software and services could be provided by federal SSPs, private software vendors, and other external providers. Figure 4 and table 10 summarize the key actions taken and challenges encountered by OPM in identifying a core financial system solution. OMB designated four federal entities—(1) the National Business Center of the Department of the Interior; (2) the Administrative Resource Center, Bureau of Public Debt, of the Department of the Treasury; (3) the Federal Integrated Solutions Center of the General Services Administration; and (4) the Enterprise Services Center of the Department of Transportation—as SSPs for federal financial management. All four SSPs offer IT hosting, application management, transaction processing, and system implementation services or have a structure for providing all four of these services.
Although the SSPs offer the four basic services mentioned above, the specific services provided may vary based on the requirements, size, and complexity of the client agency. SSPs typically offer a range of the following four basic services: IT hosting services may include systems management and monitoring, disaster recovery, help desk, network security compliance and controls, and continuity of operations plans and testing. Application management services may include system/software administration, application configuration, application setup and security, user access and maintenance, configuration management, and coordination of application upgrades and fixes. Transaction processing services may include account maintenance and reconciliation, financial reporting, regulatory and managerial reporting, standard general ledger reconciliation, payment processing, billings and collections, accounts payable, accounts receivable, travel payments, relocation payments, budgetary transactions, and fixed asset accounting. System implementation services may include implementation and integration support services, requirements analysis, system conversions, project management, systems testing, change management, and training. To help monitor and measure the performance of selected external providers in connection with the financial management line of business (FMLOB) services they provide, SSPs and agencies rely on service-level agreements, which are binding agreements that define the specific level and quality of the operational and maintenance services that an external provider will provide to a customer agency and outline penalties and incentives that may arise from not performing or exceeding the expected service levels. The inclusion of appropriate and clearly defined performance measures and metrics in service-level agreements is important for ensuring the usefulness of this tool. OMB’s FMLOB Migration Planning Guidance defines the four service categories and related performance metrics. Although specific metrics included in service-level agreements are negotiated and may vary, examples of performance metrics related to the services described above include the following: (1) For IT hosting, system availability; average total response time for system components; resolution time for critical, high, medium, and low incidents; number of security incidents in the past year; and file recovery time. (2) For application management, average time to restore mission-critical application functionality; unplanned downtime; percentage of on-time upgrades; and average retrieval time for archived data. (3) For transaction processing, invoice process cycle time; percentage of financial transactions with errors; average business days to close the books; and number of business days to report after closing books. (4) For system implementation services, percentage of standard financial management system requirements fulfilled; percentage of satisfactory postimplementation survey responses; and reduction in help desk volume. SSPs are also required to operate and maintain a COTS software package in compliance with FSIO core financial system requirements. As shown in table 11, three of the four SSPs use the Oracle software package, while two of the four use a Momentum software package. One SSP offers SAP software. 
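As a minimal sketch of how the example metrics above might be organized by service category within a service-level agreement, the following Python structure groups illustrative metrics and targets. The target values and the direction convention (whether higher or lower observed values are better) are assumptions for illustration, not figures from OMB's FMLOB Migration Planning Guidance.

```python
# Hypothetical service-level agreement template grouping example metrics by FMLOB
# service category; each entry is (metric name, target value, direction).
SLA_METRICS = {
    "IT hosting": [
        ("system availability (%)", 99.5, "min"),
        ("average total response time (seconds)", 2.0, "max"),
        ("critical incident resolution time (hours)", 4, "max"),
        ("security incidents in past year", 0, "max"),
    ],
    "Application management": [
        ("time to restore mission-critical functionality (hours)", 4, "max"),
        ("unplanned downtime (hours/month)", 2, "max"),
        ("on-time upgrades (%)", 95, "min"),
    ],
    "Transaction processing": [
        ("invoice process cycle time (days)", 5, "max"),
        ("financial transactions with errors (%)", 1.0, "max"),
        ("business days to close the books", 3, "max"),
    ],
    "System implementation": [
        ("standard requirements fulfilled (%)", 100, "min"),
        ("satisfactory postimplementation survey responses (%)", 90, "min"),
    ],
}

def meets_target(target: float, direction: str, observed: float) -> bool:
    """Return True if an observed value satisfies the agreed target."""
    return observed <= target if direction == "max" else observed >= target

# Example: evaluate a provider's reported IT hosting availability against the target
name, target, direction = SLA_METRICS["IT hosting"][0]
print(name, meets_target(target, direction, observed=99.7))  # True
```

In an actual agreement, the negotiated targets would also be tied to the penalties and incentives described above.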
In addition to the software offered by each SSP, table 11 also provides an overview of key characteristics of the four OMB-designated federal SSPs, including detailed information regarding the number of full-time equivalent staff dedicated to providing financial management services and the clients they serve. The selected characteristics provide context for the financial management systems-related operations of the four federal SSPs. This appendix includes additional details on the key benefits and challenges of agencies migrating their core financial systems to external providers for IT hosting, application management, and transaction processing. The potential benefits and challenges include those reported by the CFO Act agencies in response to our survey. The surveys were conducted prior to the issuance of OMB's new guidance. Accordingly, the effect, if any, of the new policy is not reflected in agencies' responses. Non-CFO Act agencies' use of external providers also highlights potential benefits and challenges. While external providers cited efforts to address agency concerns, they also highlighted their own concerns and challenges with agency migrations. We also noted other migration challenges related to OMB's guidance on competitions. Based on survey responses concerning the potential advantages and disadvantages of migrating core financial system support services, 16 of the 24 (67 percent) CFO Act agencies believed that the benefits of migrating IT hosting greatly or somewhat outweighed their concerns, while 14 of 24 (58 percent) reported similar perceptions concerning the benefits of migrating application management services to external providers. In comparison, as shown in table 12, 10 of the 24 (42 percent) CFO Act agencies indicated that the potential disadvantages of migrating transaction processing services to an external provider outweigh any potential advantages. According to CFO Act agency responses to our survey, some of the potential benefits of migrating the IT hosting, application management, and transaction processing services for agencies' core financial systems to external providers include the following: Potential cost savings through shared resources. For example, the Nuclear Regulatory Commission cited reductions in equipment purchase and maintenance costs, as well as in the number of staff needed to maintain the application and process transactions. Allowing the agency to focus on its mission. For example, OPM stated that migrating its financial management system to an IT hosting provider enables OPM to extricate itself from the business of managing financial systems, transfers some of the risk associated with implementing and maintaining the system, and allows the CFO organization to concentrate on its goal of providing strategic direction based on financial data. Greater efficiency and reliability through experienced staff. For example, the Department of Transportation stated in its survey response that benefits include having a provider that has experience with the specific equipment, operations, and maintenance required by the hosted application. According to Small Agency Council officials, small agencies are more likely to migrate to external providers because they do not have sufficient resources to support infrastructures required to operate and maintain core financial systems.
For example, according to one federal SSP, many of its clients consist of small commissions, such as the Election Assistance Commission, that rely on the "end-to-end" services the SSP provides. Further, according to officials at the four federal SSPs, their efforts toward acquiring additional clients are primarily focused on small to midsized agencies that may lack sufficient resources or expertise to meet their core financial system needs. The following summarizes the key reported benefits for non-CFO Act agencies. Potential cost savings through shared resources. Based on information provided by SSP officials, their clients share the same instance of core financial software hosted and maintained by SSPs with eight or more other clients, on average. Federal SSP officials stated that the use of shared instances and other tools, such as standard interfaces that facilitate the exchange of data between core financial systems and other systems, enables agencies to realize significant cost savings by spreading IT hosting, maintenance, and other related costs among multiple clients. Greater efficiency and reliability. According to FCC officials, FCC is currently modernizing its core financial system and is migrating to a commercial provider to take advantage of the provider's expertise in acquiring and maintaining the latest technology to meet FCC's needs. Further, since federal SSPs process transactions for multiple agencies, they are able to devote more resources toward processing complex transactions than smaller agencies that may not individually be required to handle such transactions on a regular basis. For example, according to one SSP, although many agencies do not process a large number of transactions involving employees' permanent changes in duty stations, the SSP maintains the expertise and capability to efficiently process these complex transactions on a regular basis because of the volume it is required to handle on behalf of all its clients. Enhanced compliance with federal standards. External providers are working to incorporate changes in software to facilitate agencies' efforts to comply with new standards and requirements, such as the Common Governmentwide Accounting Classification (CGAC) and other recently issued standard business processes. According to a federal SSP official, having a limited number of providers incorporate these changes into a common solution shared by multiple agencies, rather than each agency spending valuable resources to accomplish the same objective on its own, represents a significant advantage for the agencies relying on the shared solutions the SSP provides. CFO Act agencies highlighted following disciplined system implementation processes, along with reengineering their business processes, as among the greatest modernization challenges they face. Additional information on these and other reported key challenges affecting CFO Act agency modernization and migration efforts can be found in table 13. The following summarizes key examples of CFO Act agency survey and case study results related to challenges associated with migrating IT hosting, application management, and transaction processing to external providers. Department of Health and Human Services officials expressed concerns about the loss of control and risks associated with allowing another entity to manage or host the infrastructure on which an agency's critical data reside, which could become impaired or compromised.
Agencies cited concerns with the loss of flexibility associated with using the same setup and configurations across agencies in order to achieve efficiencies and cost savings governmentwide. In addition, agencies stated that they were reluctant to forgo their established business processes, noting that they would lose the benefits associated with their unique business processes and the technical expertise of internal staff who support and use them. For example, the Department of Energy cited concerns with losing agency capabilities and subject matter expertise and becoming totally reliant on the service provider. Case study agency officials expressed concerns that although COTS products help enable agencies to use common platforms to modernize their core financial systems, the products need additional enhancements to help meet common agency needs. For example, these officials identified a need for (1) enhancements that effectively address new governmentwide CGAC and FSIO standard business processes and agency budgetary reporting needs and (2) common interfaces that facilitate the exchange of financial data between agency core financial systems and governmentwide systems, such as the FedDebt system. Further, recognizing that unreconciled intragovernmental information continues to impede the preparation of the federal government's financial statements each year, they stated that intragovernmental transaction processing should be further clarified. Case study agency officials stated that their agencies each worked individually with selected COTS vendors to produce enhanced solutions to meet their needs. For example, the case study agencies noted that they have had to develop interfaces to existing solutions such as payroll, travel, reporting, and FedDebt that should already be part of a standard configuration. Agency officials were unable to specify the portions of their modernization costs that are specifically attributable to meeting software and configuration needs they have in common with other agencies. Although external providers acknowledged agency migration concerns and stated that they were taking steps to address them, they cited additional challenges affecting their migration-related efforts. For example, external provider officials stated that overcoming agencies' resistance to adapting their business processes to those used by external providers is a significant challenge. Further, according to one SSP official, although OMB had a goal of migrating agencies to a limited number of stable and high-performing providers, it lacked a clear mechanism for ensuring that agencies, especially large agencies, migrated to an external provider in a manner consistent with the goals of the FMLOB initiative. Specifically, based on survey responses, CFO Act agencies reported that they were relying, or planning to rely, on a total of 6 different external providers for IT hosting and application management services supporting their planned systems and a total of 12 different providers to provide these services for their current systems. We also noted other challenges related to OMB's Competition Framework that affect agency and external provider migration efforts. OMB's Competition Framework, as well as revisions made to OMB Circular No. A-127, require agencies to conduct competitions among external providers to help evaluate different options available for meeting their needs. The following is a summary of these reported challenges.
According to one federal SSP, some agency solicitations for shared services consist of lengthy, detailed requirements and other information that can sometimes result in unnecessarily expensive and time-consuming efforts to review and provide required responses. Federal SSP officials noted that the federal government may spend a significant amount of federal funds on demonstrations, especially in a situation where all four SSPs respond to a request for a demonstration from a single agency. Moreover, officials at SSPs also expressed concerns about the significant challenges they face in competing with commercial vendors and acquiring additional clients because of the limited resources they can devote to such activities. Federal SSP officials stated that full cost recovery requirements associated with being a franchise fund or working capital fund place federal SSPs at an inherent disadvantage when competing against commercial vendors under OMB’s Competition Framework. According to federal SSP officials, they may not bid on agency solicitations that would involve significant start-up costs to meet an agency’s unique needs if doing so would not also benefit other clients they serve that would also bear a portion of these costs. These officials also stated that commercial vendors have more flexibility to price their bids aggressively in early years to acquire additional business and rely on efforts to recoup their costs in subsequent years. External providers also reported seeing an increase in agencies’ desire to use firm-fixed price contracts and include performance incentives and disincentives in service-level agreements which, according to SSP officials, are difficult for them to accommodate because of full cost recovery requirements. According to DOJ officials, DOJ did not conduct a competition because the department determined that federal SSPs could not accommodate its capacity, security requirements, and unique accounting needs based on limited information received about SSP capabilities and costs during preliminary planning discussions related to its financial management system modernization effort. However, DOJ officials acknowledged that they did not receive sufficient information to fully evaluate the capabilities of the federal SSPs and stated that they were not sure whether all aspects of their preliminary determination would hold true if more research were conducted and SSP capabilities had improved. In addition to the contacts named above, individuals who made major contributions to this report were Chris Martin, Senior-Level Technologist; Michael LaForge, Assistant Director; Jehan Abdel-Gawad; Lauren Catchpole; Francine DelVecchio; F. Abe Dymond; Latasha Freeman; Wilfred Holloway; Jim Kernen; Theresa Patrizio; Carl Ramirez; Jerome Sandau; Pamela Valentine; and Carolyn Voltz.
In 2004, the Office of Management and Budget (OMB) launched the financial management line of business (FMLOB) initiative, in part, to reduce the cost and improve the quality and performance of federal financial management systems by leveraging shared services available from external providers. In response to a request to study FMLOB-related issues, this report (1) identifies the steps agencies have taken, or planned to take, to modernize their core financial systems and migrate to an external provider and (2) assesses the reported benefits and significant challenges associated with migrations, including any factors related to OMB's new financial systems modernization approach. GAO's methodology included surveying federal agencies to obtain the status of their financial management systems as of September 30, 2009 (prior to OMB's March 2010 announcement of a new approach), and interviewing officials with selected agencies, external providers, and OMB. In oral comments on a draft of this report, OMB stated its position that it was too early for GAO to draw conclusions on its new approach because it is still a work in progress. For this reason, GAO is not making any new recommendations. However, GAO observes that the experience and challenges related to prior migration and modernization efforts offer important lessons learned as OMB continues to develop and implement its new approach. In an effort to capitalize on new technologies to help address serious weaknesses in financial management and help meet their future financial management needs, federal agencies continued to modernize their core financial systems, which often has led to large-scale, multiyear financial system implementation efforts. For the last 6 years, OMB has promoted the use of shared services as a means to more efficiently and effectively meet agency core financial system needs. Overall, 14 of 23 civilian Chief Financial Officer (CFO) Act agencies are planning to complete their efforts to deploy 14 new core financial systems at various times through fiscal year 2018, and in connection with their modernization efforts, 10 of the 14 agencies are migrating, or planning to migrate, hosting and application management support services to external providers. GAO also found that the CFO Act agencies were not using a limited number of external providers, a critical element of OMB's original approach. Five of the 10 agencies planned to rely on five different commercial providers, while 2 of the 10 planned to rely on the same federal provider and 3 had not determined the provider. In contrast, smaller agencies were more frequently relying on the four federal shared service providers to provide core financial system support services to leverage the benefits of using external providers. The most common benefits of migrating cited by CFO Act agencies were external providers' expertise, the potential for cost savings, and the agencies' ability to focus more on mission-related responsibilities. However, CFO Act agencies and external providers cited various challenges affecting modernization and migration efforts, such as reengineering business processes and the ability of external providers to provide specific solutions that meet complex agency needs. In March 2010, OMB announced a new financial systems modernization approach that focuses on the use of common automated solutions for transaction processing, such as invoicing and intergovernmental transactions.
OMB issued a memorandum in June 2010 that included guidance for key elements of its new approach, such as agencies splitting financial system modernization projects into smaller segments. This new guidance also requires CFO Act agencies to halt certain modernization projects, pending OMB review and approval of revised project plans. Important aspects of the new approach have not yet been developed or articulated and OMB has stated that it plans to develop additional guidance. In GAO's view, it is critical that OMB's new guidance elaborate on the new approach and address key issues such as goals and performance plans clearly linked to strategies for achieving them, a governance structure, and specific criteria for evaluating projects. GAO believes these issues need to be addressed to reduce risks and help ensure successful outcomes as OMB moves forward with its new approach. GAO will continue to work with OMB to monitor the implementation of its new approach.
Medicaid was established in 1965 by Title XIX of the Social Security Act as a joint federal–state program to finance health care for certain low-income, aged, or disabled individuals. Medicaid is an entitlement program, under which the federal government pays its share of expenditures for any necessary, covered service for eligible individuals under each state's federally approved Medicaid plan, as described below. States pay qualified health-care providers for covered services provided to eligible beneficiaries and then seek reimbursement for the federal share of those payments. Title XIX of the Social Security Act allows flexibility in the states' Medicaid plans. Although the federal government establishes broad federal requirements for the Medicaid program, states can elect to cover a range of optional populations and benefits. Guidelines established by federal statutes, regulations, and policies allow each state some flexibility to (1) broaden eligibility standards; (2) determine the type, amount, duration, and scope of services; (3) set the rate of payment for services; and (4) administer its own program, including processing and monitoring of medical claims and payment of claims. Differences in program design can lead to differences in state programs' vulnerabilities to improper payments and state approaches to protecting the program. States are required to submit plans to CMS outlining how they will verify Medicaid eligibility factors, including income, residency, age, Social Security numbers (SSN), citizenship, and household composition. With more than 50 distinct state-based programs that are partially federally financed, overseeing Medicaid is a complex challenge for CMS and states. In order to participate in Medicaid, federal law requires states to cover certain population groups (mandatory-eligibility groups) and gives the states the flexibility to cover other population groups (optional-eligibility groups). States set individual eligibility criteria within federal minimum standards. There are other nonfinancial eligibility criteria that are used in determining Medicaid eligibility. In order to be eligible for Medicaid, individuals need to satisfy federal and state requirements regarding residency, immigration status, and documentation of U.S. citizenship. Beginning in October 2013, states were required to use available electronic data sources to confirm information included on the application, while minimizing the amount of paper documentation that consumers need to provide. As of March 25, 2011, federal regulations require that certain ordering and referring physicians or other professionals providing services under the state plan or under a waiver of the plan must be enrolled as participating providers, which includes screening the providers upon initial enrollment and when follow-up verification occurs (at least every 5 years). The follow-up verification is referred to as revalidation or reenrollment. As part of the enrollment process, and depending on the provider's risk level, states may be required to collect certain information about the providers' ownership interests and criminal background, search exclusion and debarment lists, and take action to exclude those providers who appear on those lists. When state officials discover potentially fraudulent activity in the enrollment process, states must refer that activity or providers to law-enforcement entities for investigation and possible prosecution.
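As a rough illustration of the enrollment checks described above, the following minimal sketch flags a provider who appears on an exclusion or debarment list and notes when the follow-up verification (revalidation) is due. The identifiers, list contents, and function shown are hypothetical and do not represent any CMS or state system.

```python
# Illustrative sketch of the enrollment checks described above; identifiers,
# list contents, and field names are hypothetical, not CMS requirements.
from datetime import date

EXCLUDED_PROVIDERS = {"123456789"}   # hypothetical exclusion-list entries
DEBARRED_PROVIDERS = {"987654321"}   # hypothetical debarment-list entries


def screening_actions(provider_id, last_screened, today):
    actions = []
    if provider_id in EXCLUDED_PROVIDERS:
        actions.append("exclude: provider appears on an exclusion list")
    if provider_id in DEBARRED_PROVIDERS:
        actions.append("exclude: provider appears on a debarment list")
    # Follow-up verification (revalidation) is required at least every 5 years.
    if (today - last_screened).days >= 5 * 365:
        actions.append("revalidate: follow-up screening is due")
    return actions or ["no action required by these checks"]


print(screening_actions("123456789", date(2008, 6, 1), date(2015, 5, 1)))
```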
In May 2014 we reported that states have historically provided Medicaid benefits using a fee-for-service system, in which health-care providers are paid for each service. However, according to CMS, in the past 15 years, states have more frequently implemented a managed-care delivery system for Medicaid benefits. In a managed-care delivery system, beneficiaries obtain some portion of their Medicaid services from an organization under contract with the state, and payments to MCOs are typically made on a predetermined, per person, per month basis. Currently, two-thirds of Medicaid beneficiaries receive some of their services from MCOs, and many states are expanding their use of managed care to additional geographic areas and Medicaid populations. According to HHS, approximately 27 percent, or $74.7 billion, of nationwide federal Medicaid expenditures in fiscal year 2011 (the fiscal year our review focused on) were attributable to Medicaid managed care. States oversee MCOs that provide care to Medicaid beneficiaries through contracts and reporting requirements, which may include identifying improper payments to providers within their plans. In September 2014, the Center for Program Integrity was reorganized to integrate the Medicare and Medicaid program-integrity functions across the Center for Program Integrity, so that all Center for Program Integrity units are focused on both programs. To achieve Medicare–Medicaid integration, the Medicaid Integrity Group was also reorganized and integrated with Medicare staff so that the Medicaid Integrity Group no longer exists as a separate identifiable unit. HHS OIG oversees Medicaid program integrity through its audits, investigations, and program evaluations. It is also responsible for enforcing certain civil and administrative health-care fraud laws. States have primary responsibility for reducing, identifying, and recovering improper payments. Of the approximately 9.2 million beneficiaries in the four states that we examined, thousands of cases from the fiscal year 2011 data analyzed showed indications of potentially improper payments, including fraud, to Medicaid beneficiaries and providers. The numbers on beneficiaries and providers may not reflect the total incidence of potentially improper payments, including fraud, because it was not possible to fully investigate claims that did not have a valid SSN. For example, we were unable to match beneficiaries and providers without valid SSNs to the full DMF, making it difficult to fully investigate such cases for other indicators of improper payments or fraud. In addition to beneficiaries, we found hundreds of Medicaid providers who were potentially improperly receiving Medicaid payments. As described below, these cases show indications of certain types of fraud or improper benefits. Providers with suspended or revoked medical licenses. All physicians applying to participate in state Medicaid programs must hold a current, active license in each state in which they practice. During enrollment, states are required to screen out-of-state licenses to confirm the license has not expired and that there are no current limitations on the license. Additionally, states are required to provide CMS with information and access to certain information respecting sanctions taken against health-care practitioners and providers by their own licensing authorities.
Using data from the Federation of State Medical Boards (FSMB), we found that approximately 90 medical providers in the four selected states had their medical licenses revoked or suspended in the state in which they received payment from Medicaid during fiscal year 2011. Medicaid approved the associated claims of these cases at a cost of at least $2.8 million. Invalid addresses for providers. A drop-box or mailbox scheme is a common fraud scheme in which a fraud perpetrator will set up a medical-oriented business and will use a CMRA as his or her official address. The four states we examined for our review required providers to provide the physical service location of their business when they apply to provide Medicaid services. Our analysis matching Medicaid data to USPS address-management tool data found that at least 220 providers may have inappropriately used a virtual address as their physical service location. Specifically, these providers used a CMRA address as their physical service location. For these providers, Medicaid approved claims of at least $318,000. Additionally, our analysis found nearly 26,600 providers with addresses that did not match any USPS records. These unknown addresses may have errors due to inaccurate data entry or differences in the ages of MMIS and USPS address-management tool data, making it difficult to determine whether these cases involve fraud through data matching alone. Our analysis also identified 47 providers with foreign addresses as their location of business. These providers had addresses in Canada, China, India, and Saudi Arabia. Our analysis found that 8 of the 47 providers with foreign addresses had been paid over $90,000 in Medicaid claims during fiscal year 2011. In December 2010, CMS released guidance on implementing the Patient Protection and Affordable Care Act (PPACA) provisions prohibiting payments to institutions or entities located outside of the United States. CMS’s guidance went into effect on June 1, 2011. Approximately 28 percent of the claims we identified occurred after CMS’s guidance went into effect. Deceased providers. We identified over 50 deceased providers in the four states we examined whose identities received Medicaid payments. Our analysis matching Medicaid eligibility and claims data to SSA’s full DMF found these individuals were deceased before the Medicaid service was provided. The Medicaid benefits involved with these deceased providers totaled at least $240,000 for fiscal year 2011. These benefits are an indication of improper or potentially fraudulent payments. Excluded providers. We found that about 50 providers in the four states we examined had been excluded from federal health-care programs, including Medicaid; these providers were excluded from these programs when they billed for Medicaid services during fiscal year 2011. The selected states paid the claims at a cost of about $60,000. The federal government can exclude health-care providers from participating in the Medicaid program for several reasons. Excluded providers can be placed on one or both of the following exclusion lists, which state Medicaid officials must check no less frequently than monthly: the List of Excluded Individuals and Entities (LEIE), managed by HHS, and the System for Award Management (SAM), managed by GSA. 
The LEIE provides information on health-care providers that are excluded from participation in Medicare, Medicaid, and other federal health-care programs because of criminal convictions related to Medicare or state health programs or other major problems related to health care (e.g., patient abuse or neglect). SAM provides information on individuals or entities that are excluded from participating in any other federal procurement or nonprocurement activity. Federal agencies can place individuals or entities on SAM for a variety of reasons, including fraud, theft, bribery, and tax evasion. On the basis of our matching of state prison data to Medicaid claims data, we found that 16 providers in the selected states were incarcerated in state prisons at some point in fiscal year 2011. The offenses that led to incarceration included drug possession, drug trafficking, money laundering, racketeering, and murder. We did not identify any Medicaid claims associated with these providers while they were incarcerated. Through regulation, CMS has taken steps since 2011 to make the Medicaid enrollment-verification process more data-driven. The steps may address many of the improper-payment indicators that were found in our 2011 analysis of Medicaid claims; specifically, CMS took regulatory action to enhance beneficiary-screening procedures in 2013 and provider-screening procedures in 2011. However, gaps in guidance and data sharing continue to exist, and additional opportunities for improvements are available for screening beneficiaries and providers. In response to PPACA, which was enacted in 2010, CMS issued federal regulations in 2013 to establish a more-rigorous approach to verify financial and nonfinancial information needed to determine Medicaid beneficiary eligibility. Specifically, under these regulations, states are required to use electronic data maintained by the federal government to the extent that such information may be useful in verifying eligibility. CMS created a tool called the Data Services Hub (hub) that was implemented in fiscal year 2014 to help verify beneficiary applicant information used to determine eligibility for enrollment in qualified health plans and insurance-affordability programs, including Medicaid. The hub routes application information to, and verifies it against, various external data sources, such as SSA and the Department of Homeland Security. According to CMS, the hub can verify key application information, including household income and size, citizenship, state residency, incarceration status, and immigration status. If properly implemented by CMS, the hub can help mitigate some of the potential improper-payment issues that we identified earlier in our analysis of fiscal year 2011 Medicaid claims, including state residencies, deceased beneficiaries, and incarcerated beneficiaries. Figure 1 shows beneficiary enrollment procedures that states are required to follow beginning in October 2013. Under CMS's regulations, when states receive an application they are to use the hub to verify an individual's eligibility. If the needed information is not available in the hub, or if there is missing information on the application, the state must use other data sources to determine an individual's eligibility. States are to use all available electronic data resources before contacting an applicant directly. Under 42 C.F.R.
§ 435.945(k), subject to approval by the Secretary, states may request and use information from alternate sources, provided that such alternative source or mechanism will reduce the administrative costs and burdens on individuals and states while maximizing accuracy, minimizing delay, meeting applicable requirements relating to the confidentiality, disclosure, maintenance, or use of information, and promoting coordination with other insurance-affordability programs. The data used for our study are from fiscal year 2011, approximately 3 years prior to implementation of the CMS hub requirement. Medicaid services to individuals are to cease once a beneficiary dies. Under CMS regulations, states are to screen beneficiaries through the hub, which includes a check using the full DMF to determine whether they are deceased, at the time of initial enrollment as well as on at least an annual basis thereafter. Hence, the extent to which the hub identifies deceased individuals in Medicaid is generally limited to about once every year. To supplement the death verification check from the hub, states may use other electronic resources they have available, such as state vital records, to identify deceased beneficiaries. While officials at the four states we examined said that they periodically check the state vital records to determine whether a potential Medicaid beneficiary has died, the four states did not use the more-comprehensive full DMF to perform this check outside of the initial enrollment or annual revalidation period. As discussed earlier, and highlighted in table 1, we used the full DMF to identify approximately 200 incidents of potential fraud in these four states in fiscal year 2011. Without using information from the full DMF, states can generally only detect deaths within the state's borders and not prevent or detect benefit payments made for individuals who had their deaths recorded in other states' vital records. Additionally, we previously reported the full DMF contained approximately 40 percent more records than the public DMF for deaths reported in 2012 alone. Moreover, in March 2015, we reported that while verifying eligibility using SSA's death data can be an effective tool to help prevent improper payments to deceased individuals or those that use their identities, agencies may not be obtaining accurate data because of weaknesses in how these data are received and managed by SSA. According to CMS officials, many state Medicaid agencies have long-standing policies of using data matches against both SSA and state vital statistics to identify deceased individuals. SSA has made the full DMF available through the hub for the states' annual redetermination and also has agreements in place to provide death indicators based on the full DMF to states. In commenting on the draft of this report, SSA officials stated that the agency also provides the full DMF to CMS. Thus, states should be able to access this death information directly from CMS, according to SSA. While the federal regulation requires states to check the hub for such items as citizenship and incarceration, CMS officials noted that the federal regulation does not specify how deceased individuals should be identified nor has CMS explored the feasibility of states using the full DMF in the periodic screening for deceased individuals, outside of the initial enrollment or the annual revalidation period.
As a result, states may not be able to detect individuals who have moved to and died in other states and prevent payment of potentially fraudulent benefits. PPACA authorized CMS to implement several actions to strengthen provider-enrollment screening. CMS and HHS OIG issued a final rule in February 2011, effective March 2011, to implement many of the new screening procedures. This final rule, if properly implemented, will address some of the issues that we found in our analysis of fiscal year 2011 data, such as screening of excluded providers. As shown in figure 2, to enroll in Medicaid directly with the state, providers must apply to the state Medicaid office. While PPACA requires that all providers and suppliers be subject to licensure checks, it gave CMS discretion to establish a risk-based application of other screening procedures. As part of the February 2011 regulation, CMS determined that states must continue to verify providers and suppliers using various data sources, such as the full DMF, National Plan and Provider Enumeration System, LEIE, and SAM. According to CMS's risk-based screening, moderate- and high-risk providers and suppliers additionally must undergo unscheduled or unannounced site visits, while high-risk providers and suppliers also will be subject to fingerprint-based criminal-background checks. This requirement may address some of the potentially fraudulent or improper payments highlighted in table 2, including approximately 200 providers with a CMRA or foreign address. Additionally, the regulations require the state Medicaid agency to revalidate providers at least every 5 years. Because the regulation was effective in March 2011, the states are required to complete revalidation for Medicaid providers in their states by March 2016. We found that the states in our review had different methods for identifying deceased providers. Specifically, according to officials in one state we examined, Arizona, the state uses the public DMF to periodically screen providers. Michigan uses a private-company dataset in monitoring providers for, among other things, deaths; however, the dataset used is not the full DMF but the public DMF, which excludes state-reported death data. New Jersey officials stated that they use a different source of death data—an Internet genealogy website—to check for deceased providers during the application process. According to the genealogy website, it includes deaths from SSA through 2011 and contains updated obituaries from newspapers. In addition, according to HHS, providers must hold a valid professional license before enrolling in Medicaid. CMS regulations require states to verify licenses in states in which the provider is enrolling and in each of the other states in which the provider purports to be licensed, as well. Two states we examined, Arizona and Michigan, review licenses throughout the country. Arizona uses the National Practitioner Data Bank for license verification. The National Practitioner Data Bank is an HHS nationwide system that is primarily an alert or flagging system intended to facilitate a comprehensive review of the professional credentials of health-care practitioners, health-care entities, providers, and suppliers.
The National Practitioner Data Bank contains adverse actions, including certain licensure, clinical privileges, and professional-society membership actions, as well as Drug Enforcement Administration controlled-substance registration actions, and exclusions from participation in Medicare, Medicaid, and other federal health-care programs. Michigan, on the other hand, uses a private-company dataset that periodically monitors providers for licenses and licensure actions. New Jersey and Florida both screen the providers within their states, as required. However, neither state uses a nationwide system, such as FSMB or the National Practitioner Data Bank, to validate licenses or determine whether the provider has been sanctioned. Although the states use differing processes for license verification, which is allowable under Medicaid, all four states periodically reviewed licenses to ensure that providers are licensed to practice medicine in their states to meet the CMS requirement. According to CMS's February 2011 regulation, ordering and referring providers participating in Medicaid in a risk-based managed-care environment are not required to enroll in Medicaid, and therefore are not subject to screening provisions discussed previously. As explained in its final rule, HHS did not require Medicaid managed-care providers to enroll with Medicaid programs because doing so would have resulted in unequal treatment of managed-care providers under the Medicare program, which does not require managed-care providers to enroll. Although not required, HHS officials stated that they do encourage states to screen managed-care network providers. In this regard, in May 2014, we reported that neither state nor federal entities are well positioned to identify improper payments made to MCOs, nor are they able to ensure that MCOs are taking appropriate actions to identify, prevent, or discourage improper payments. We stated that improving federal and state efforts to strengthen Medicaid managed-care program integrity takes on greater urgency as states that choose to expand their Medicaid programs under PPACA are likely to do so with managed-care arrangements, and will receive a 100 percent federal match for newly eligible individuals from 2014 through 2016. As we reported in May 2014, unless CMS takes a larger role in holding states accountable, and provides guidance and support to states to ensure adequate program-integrity efforts in Medicaid managed care, the gap between state and federal efforts to monitor managed-care program integrity will leave a growing portion of federal Medicaid dollars vulnerable to improper payments. In the May 2014 report, we recommended that CMS increase its oversight of program-integrity efforts by requiring, in part, that CMS update its guidance on Medicaid managed-care program integrity. In May 2014, HHS agreed with our recommendation, but as of February 2015 had not issued new guidance. Officials in Arizona, Florida, and Michigan said that their respective states require that all managed-care network providers enroll or register with the state Medicaid agency. We believe this standardization potentially eliminates discrepancies found in states when the credentialing standards for the managed-care network may differ from the state's enrollment processes, and the state relies on contracted MCOs to collect network-provider disclosures, check providers and affiliated parties for exclusions, and oversee other aspects of the provider-enrollment process.
Thus, by requiring that all MCO providers be enrolled directly with the states, those three states maintain centralized control over the screening and registration process and may be better positioned to ensure the integrity of their Medicaid programs. We have found that fraud prevention is the most efficient and effective means to minimize fraud, waste, and abuse rather than trying to recover payments once they are made. Thus, controls that prevent potentially fraudulent health-care providers from entering the Medicaid program or submitting claims are the most-important element in an effective fraud-prevention program. Effective fraud-prevention controls require that, where appropriate, organizations enter into data-sharing arrangements with each other to perform validation. System edit checks (i.e., built-in electronic controls) are also crucial in identifying and rejecting potentially fraudulent enrollment applications. Although CMS has taken steps through its program regulations in providing guidance to states for screening providers, the states we examined reported difficulties in implementing the regulations. One provision in the 2011 HHS regulation allowed states to rely on the results of provider screening by Medicare contractors to determine provider eligibility for Medicaid. According to HHS, this provision would eliminate additional screening and enrollment requirements for Medicaid providers, and also eliminate additional costs and burdens for separate screening for state Medicaid programs. To administer the provider screening, application fee, and revalidation requirements successfully, as specified in federal regulations, CMS determined that states must have access to Medicare enrollment data to determine whether a provider is currently enrolled in the Medicare program, has been denied enrollment, or is currently enrolling. According to CMS, in April 2012, CMS established a process by which states would have direct access to Medicare's enrollment database—the Provider Enrollment, Chain and Ownership System (PECOS). Each state is given "read only," manual access to PECOS. CMS provided the states access to PECOS in hopes that the states will be able to use these data in minimizing the amount of screening and costs that are associated with providers that are already enrolled in Medicare. However, according to our discussions with officials in the four selected states, the states are using PECOS to screen a segment of their provider population but none currently utilize PECOS for their entire provider population. Arizona officials stated that they use PECOS in the screening of out-of-state providers. Michigan officials stated that they use PECOS on medium- or high-risk providers to determine whether a site visit is warranted. New Jersey officials stated they use PECOS to confirm an out-of-state provider's Medicare provider status and view the results of the most-recent site-visit inspection. Florida officials said that they do not screen all providers using PECOS. With regard to using PECOS for all Medicaid providers in their screening processes, we determined the following: State officials told us that PECOS required manual lookups of individual providers, a task that one state characterized as inefficient and administratively burdensome.
According to CMS officials, as of October 2013, CMS began providing all interested states access to a monthly PECOS data-extract file that contains basic Medicare enrollment information; the state officials we interviewed were unaware that they could obtain automated data extracts from PECOS. Additionally, state officials from Florida, Michigan, and New Jersey said that they use a limited amount of pertinent information, specifically site-visit information, from PECOS to perform the necessary provider screening. However, there is additional information in PECOS, such as ownership information, that is necessary for state Medicaid agencies to screen providers properly and that is not included in the information that they use. Only Arizona officials stated they are able to utilize PECOS ownership information for providers. According to CMS officials, ownership information on providers can be obtained through a detailed-level view of PECOS. However, CMS has not made ownership information available to the states through the monthly PECOS data-extract file. Some state officials noted that full electronic access to all information in the PECOS system would streamline provider-screening efforts, resulting in a more-efficient and more-effective process. Additional CMS guidance to the states on requesting automated information through PECOS and ensuring that such information includes key ownership information could help states improve efficiency of provider screening. The Medicaid program is a significant expenditure for the federal government and the states, representing over $310 billion in federal outlays in fiscal year 2014. Because of the size and continued expansion of the Medicaid program, it is important that the federal government and the states continue to find ways to prevent and reduce improper payments, including fraud, in the program. Since 2011, CMS has taken steps to strengthen Medicaid beneficiary and provider enrollment- screening controls. As part of this ongoing endeavor, increasing information and data-sharing efforts between the federal government and state Medicaid programs could help enhance efforts to identify improper payments and potentially fraudulent activities. As the federal overseer of the Medicaid program, CMS is well positioned to provide additional guidance on accessing information in federal databases, such as SSA information about deceased individuals and automated information on providers through Medicare’s enrollment database—the Provider Enrollment, Chain and Ownership System (PECOS)—that would help identify and prevent benefits and payments to those individuals and providers who are ineligible to participate in Medicaid. To further improve efforts to limit improper payments, including fraud, in the Medicaid program, we recommend that the Acting Administrator of CMS take the following two actions: issue guidance to states to better identify beneficiaries who are deceased; and provide guidance to states on the availability of automated information through Medicare’s enrollment database—the Provider Enrollment, Chain and Ownership System (PECOS)—and full access to all pertinent PECOS information, such as ownership information, to help screen Medicaid providers more efficiently and effectively. We provided a draft copy of this report to HHS, SSA, and state Medicaid program offices for Arizona, Florida, Michigan, and New Jersey. 
Written comments from HHS, SSA, the Arizona Health Care Cost Containment System (AHCCCS), the Florida Agency for Healthcare Administration, and the Michigan Department of Community Health are summarized below and reprinted in appendixes II–VI. HHS concurred with our recommendations. SSA did not comment on the findings and recommendations but provided clarifying comments on the full DMF. AHCCCS disagreed with our methodology and provided detailed comments on our findings, as described below. The Florida Agency for Healthcare Administration said it supports our efforts to identify provider and beneficiary fraud. The Michigan Department of Community Health agreed with our findings and supports our recommendations. In an e-mail received on March 24, 2015, the Chief of Investigations of the New Jersey Office of the State Comptroller, Medicaid Fraud Division, did not provide comments on the findings but provided a technical comment, which we incorporated as appropriate. The Florida Department of Children and Families also provided technical comments, which we incorporated as appropriate. HHS concurred with both of our recommendations. Regarding our first recommendation, to issue guidance to states to better identify beneficiaries who are deceased, HHS stated that it will work with states to determine additional approaches to better identify deceased beneficiaries and continue to provide state-specific technical assistance as needed. In response to our second recommendation, HHS indicated that it will continue to educate states about the availability of PECOS information and how to use that information to help screen Medicaid providers more effectively and efficiently. HHS also outlined steps the agency has taken to address beneficiary and provider eligibility fraud since fiscal year 2011—the time frame for the data used in our study—many of which were mentioned in our report. As described in our report, we used fiscal year 2011 data because it was the most-recent consistently comparable data available. In its written comments, SSA did not comment on the report's findings and recommendations but provided clarifying information regarding access to the full Death Master File (DMF), which we incorporated as appropriate. Additionally, SSA stated that CMS already has access to the full DMF and can share that information with states to ensure proper payment of Medicaid benefits. We believe that such action by CMS could address our first recommendation. In its written comments, AHCCCS said that it takes exception to being included in a series of findings that are global in nature and offer no state-specific detail. As we noted in our meetings with all state agencies included in our study, we did not provide state-level detail for two primary reasons. First, because CMS was the audited agency for our work, conducting analysis at the state level would be outside the scope of our work and would put the focus on a comparison between the states, rather than on CMS oversight. In addition, due to the age and limitations of the data, as noted in the report, we would not be referring specific cases for follow-up. AHCCCS further stated that our report contained misstatements that cannot be attributed to either state. Because AHCCCS did not provide any examples, we cannot address this assertion but stand by the findings and recommendations in our report. AHCCCS also stated that most of the findings in our report are derived from data sources that are considered unreliable.
In our report, we outline the steps we took to assess the reliability of our data and determine that they were sufficiently reliable for performing our work. Additionally, we note the key limitations of the data sources we use for our report and provide the appropriate caveats, as applicable, for the findings from our data analysis. Further, AHCCCS uses several of the same data sources for its eligibility screening as we used in our report. For example, AHCCCS notes that Arizona has found that the SSA death file is unreliable. It further notes that it uses SSA’s real-time State Online Query system to obtain date of death information. According to SSA in its written response to the draft report, the source for the State Online Query system data used by Arizona is the SSA DMF. AHCCCS also states that the findings of our report do not reflect the current eligibility-screening process in Arizona. We acknowledge the limitations stemming from the age of the MSIS data (fiscal year 2011) and the passage of PPACA in 2011. Furthermore, we directly address this limitation in the report where we discuss actions CMS has taken to strengthen certain Medicaid enrollment-screening controls. Specifically, we state that CMS has taken regulatory action since 2011 to enhance beneficiary-screening procedures and provider-screening procedures that may address the improper-payment indicators found in our report. We then discuss the current eligibility-screening process at the federal and state level. We did not make any changes to the report based on these AHCCCS comments, because we believe the essence of the comments was already acknowledged within the report. AHCCCS also provided comments on specific sections of our analysis, beginning with incarcerated beneficiaries. First, AHCCCS identified reliability and timeliness issues with the SSA incarceration file. This comment is not pertinent to our work, as this file was not a data source used in our analysis. As we note in appendix I, we used each state’s department of corrections prisoner database for individuals incarcerated for any period during fiscal year 2011. Second, AHCCCS states that we failed to distinguish whether incarcerated individuals were hospitalized. To the contrary, we note that we reviewed these claims’ type of service to determine that none qualified for federal matching funds. Accordingly, this would exclude individuals that were hospitalized. Regarding our analysis using the USPS address-management tool, AHCCCS incorrectly states that our report assumes that all physical addresses are known to USPS. We do not state this in our report. Specifically, the report notes that federal law requires states to make Medicaid available to eligible individuals who do not reside in a permanent dwelling or do not have a fixed home or mailing address. Therefore, there are no requirements related to listing actual physical addresses for beneficiary enrollment and eligibility determinations. Further, the focus of our analysis was CMRAs used as the residential address, not the validity of all addresses listed on beneficiary applications. As such, the comment from AHCCCS is not supported by the actual content and analyses in our report. AHCCCS notes that our analysis of provider controls is an extrapolation from the combined set of states’ data. This is incorrect. Our report does not extrapolate, or make any population estimates, of provider eligibility fraud. 
We provided a descriptive analysis of potential improper payments and provider-eligibility fraud based on the data from fiscal year 2011. As stated earlier, we listed the appropriate caveats to our findings to ensure that the results of our analysis were not taken in an inappropriate context, as implied by AHCCCS. Finally, AHCCCS identified three recommendations that it believes would address Medicaid program-integrity issues. Specifically, AHCCCS stated CMS should allow states to use disclosures conducted by Medicare or another state Medicaid program in the enrollment of Medicaid providers, allow states to access the federal criminal database to conduct initial and periodic background checks on providers, and promote other national initiatives for data sharing on Medicare and provider license verifications. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Commissioner of Social Security, relevant state agencies, and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. In this report, we (1) identify and analyze indicators of improper or potentially fraudulent payments to Medicaid beneficiaries and providers and (2) examine the extent to which federal and state oversight policies, controls, and processes are in place to prevent and detect fraud and abuse in determining eligibility for Medicaid beneficiaries and enrolling providers. To identify indicators of improper or potentially fraudulent payments to Medicaid beneficiaries and providers, we obtained and analyzed Medicaid claims paid in fiscal year 2011, the most-recent consistently comparable data, for four states: Arizona, Florida, Michigan, and New Jersey. Medicaid payments to these states constituted about 13 percent of all Medicaid payments made during fiscal year 2011. These four states were selected primarily because they had reliable data and were among states with the highest Medicaid enrollment. The results of our analysis of these states cannot be generalized to other states. We obtained Centers for Medicare & Medicaid Services (CMS) Medicaid Statistical Information System (MSIS) beneficiary, provider, and other services claims data, as well as state Medicaid Management Information System (MMIS) claims identification data to perform our work. Managed-care organizations (MCO) receive a monthly capitated payment. As a result, the Medicaid paid amounts associated with managed care may not be reflected in the state claims that were submitted to CMS for medical services, and hence our estimate is likely understated. All of the states included in our review—Arizona, Florida, Michigan, and New Jersey—had MCO arrangements in place. To identify beneficiaries that submitted applications with identification information (name, date of birth, and Social Security number) that did not match with Social Security Administration (SSA) records, we used the SSA Enumeration Verification System.
Specifically, we processed unique beneficiary identification information from the MSIS and MMIS files through the SSA Enumeration Verification System to determine the extent to which SSN information in the MSIS files was accurate. We analyzed the output codes from the SSA Enumeration Verification System to identify unique individuals who had Medicaid application identification information that did not match SSA records. Applications may have inaccuracies due to simple errors such as inaccurate data entry or incomplete sections, making it difficult to determine whether these cases involve fraud through data matching alone. In addition, there may be situations where an individual does not have an SSN (for example, a newborn child). Nonetheless, these applications pose a higher risk of fraud because there is no complete electronic record of beneficiaries' identities. To identify providers and beneficiaries with identities associated with deceased individuals at the time of their Medicaid services, we matched Medicaid data—MMIS and MSIS—to the SSA complete file of death information from October 2012. We matched records using the SSN and full name of the individual. We then identified unique individuals who had Medicaid claims processed where the date of death in the SSA file occurred before the beginning service date in the Medicaid claims file. To identify providers and beneficiaries with identities associated with incarcerated individuals at the time of their Medicaid services, we matched our selected states' MMIS data to the states' departments of corrections prisoner databases. Prisoner data included individuals incarcerated for any period during fiscal year 2011. For Arizona, Florida, and New Jersey, we identified provider and beneficiary records for which the Medicaid SSN and name matched that of a person who was incarcerated in fiscal year 2011 in any of the four states. Michigan did not provide SSNs in its incarceration data. For Michigan, we identified provider and beneficiary records for which the Medicaid name and date of birth exactly matched that of a person who was incarcerated in fiscal year 2011 in any of the four states. We then identified Medicaid claims associated with the identified individuals by matching to the MSIS data. We compared the beginning service date of the claims to the individual's admittance and release date to identify all claims that occurred while the associated beneficiary or provider identity was incarcerated. Additionally, we reviewed these claims' type of service to determine that none qualified for federal matching funds. It is not possible to determine from data matching alone whether these matches definitively identify recipients who were deceased or incarcerated without reviewing the facts and circumstances of each case. For example, it is possible that individuals can be erroneously listed in the full Death Master File (DMF). Similarly, a provider or beneficiary may have an SSN, name, and date of birth similar to an individual in state prison records. Alternatively, our matches may also understate the number of deceased or incarcerated individuals receiving assistance because matching would not detect applicants whose identifying information in the Medicaid data differed slightly from their identifying information in other databases.
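The date comparisons used in the death and incarceration matches described above can be illustrated with the following minimal sketch. The DataFrame layouts, column names, and sample values are assumptions made for illustration; they do not reflect the structure of the actual MSIS, MMIS, DMF, or state corrections files.

```python
# Illustrative sketch of the matching logic described above; all column names,
# keys, and sample values are assumed and do not reflect actual data structures.
import pandas as pd

claims = pd.DataFrame({
    "ssn": ["111", "222", "333"],
    "name": ["A SMITH", "B JONES", "C LEE"],
    "service_begin": pd.to_datetime(["2011-02-01", "2011-05-15", "2011-07-01"]),
})

dmf = pd.DataFrame({  # death file extract (assumed layout)
    "ssn": ["111"],
    "name": ["A SMITH"],
    "date_of_death": pd.to_datetime(["2010-12-30"]),
})

prison = pd.DataFrame({  # state corrections data (assumed layout)
    "ssn": ["222"],
    "name": ["B JONES"],
    "admit": pd.to_datetime(["2011-01-01"]),
    "release": pd.to_datetime(["2011-12-31"]),
})

# Claims where the recorded date of death precedes the beginning service date.
dead = claims.merge(dmf, on=["ssn", "name"])
dead = dead[dead["date_of_death"] < dead["service_begin"]]

# Claims whose beginning service date falls within an incarceration period.
jailed = claims.merge(prison, on=["ssn", "name"])
jailed = jailed[(jailed["service_begin"] >= jailed["admit"]) &
                (jailed["service_begin"] <= jailed["release"])]

print(dead[["ssn", "service_begin", "date_of_death"]])
print(jailed[["ssn", "service_begin", "admit", "release"]])
```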
To identify providers and beneficiaries with invalid addresses, we submitted all MMIS data through that USPS address-management tool for fiscal year 2014. The USPS address-management tool provides information such as whether an address is undeliverable, unknown, a Commercial Mail Receiving Agency (CMRA), or contains an invalid city, state, or ZIP code. Additionally, the address-management tool standardized and corrected addresses based on the information submitted. We considered invalid addresses to be unknown/blank, CMRAs, or foreign addresses. To identify providers with CMRAs, we identified all records where the address-management tool identified and confirmed the address with private-mailbox-number information. We conducted further analysis to remove any provider records that were not for the physical service location of their business, such as a billing or correspondence address for a provider. To identify beneficiaries with commercial addresses, we identified all records where the address- management tool identified the residential address as a commercial address with or without private-mailbox-number information. To identify providers and beneficiaries with unknown addresses, we identified all records where the USPS address-management tool identified the address as not found or blank. To identify providers and beneficiaries with foreign addresses, we identified and reviewed all records where the USPS address-management tool identified the address as having an invalid city or state. We removed records that had been corrected by the USPS address-management tool as well as military bases. We then conducted additional analysis to identify MSIS claims associated with both the providers and beneficiaries with invalid addresses. It is not possible to determine through data matching alone whether the identified claims were definitely associated with invalid addresses without reviewing additional information for each claim due to the difference in MMIS and address-management tool data age. For example, it is possible that an address was valid in fiscal year 2011 and was no longer recognized in fiscal year 2014. To identify Medicaid beneficiaries who received benefits in two or more states concurrently, we identified all beneficiary SSNs that appeared in two or more states’ MMIS data in fiscal year 2011. We then found all claims associated with the beneficiary identities. We conducted further analysis to determine the states in which each beneficiary identity appears and the service ranges—first and last date of service—for those states. We defined a concurrent claim as a claim that occurred within the service range of a second state for the same beneficiary identity. For each claim, we compared its date of service to the service ranges for the beneficiary identity to determine whether it was a concurrent claim. It is not possible to definitely say through data matching alone that a beneficiary was improperly receiving Medicaid benefits in two or more states concurrently without looking into further information for each claim and beneficiary. For example, a beneficiary could have been a resident in one state and received services, then changed residency to a second state and received benefits for a brief period, before finally relocating again back to the original state and receiving additional services. 
In this case, the claims could have been identified as concurrent claims even if the beneficiary did not receive any services from the original state during his or her relocation period in the second state. To identify claims that might have been improperly processed and paid by the Medicaid program because the federal government had excluded these providers from providing services to Medicaid beneficiaries, we compared the Medicaid claims to the exclusion and debarment files from the Department of Health and Human Services' (HHS) Office of Inspector General (OIG) and the General Services Administration (GSA). Specifically, we used the HHS List of Excluded Individuals and Entities (LEIE) file from September 2012 and the GSA Excluded Parties List System (EPLS) database extract from October 2011 to perform our match. We matched MMIS and MSIS Medicaid data using SSN and individual name with both the LEIE and the EPLS data extracts. We then identified unique individuals who had Medicaid claims processed where the date of exclusion occurred before the beginning service date in the Medicaid claims file. To identify claims that might be improperly processed and paid by the Medicaid program because the provider had a revoked or suspended license, we compared Medicaid claims data to the Federation of State Medical Boards (FSMB) Physician Data Center database extract from calendar year 2014. We identified providers with actions that, in some cases, may be prohibited under federal Medicaid regulations and that resulted in a suspended or revoked license. We matched these providers with our Medicaid claims data by SSN and provider name. We identified unique individuals who had Medicaid claims processed where the date of license action occurred before the beginning service date in the Medicaid claims file. To identify federal and state oversight policies, controls, and processes in place to prevent and detect fraud and abuse in determining eligibility for Medicaid beneficiaries and enrolling providers, we reviewed federal statutes, CMS regulations, and state Medicaid policies pertinent to program-integrity structures, met with agency officials, and visited state Medicaid offices that perform oversight functions. We used federal standards for internal control, GAO's Fraud Prevention Framework, federal statutes, and Medicaid eligibility regulations to evaluate these functions. To determine the reliability of the data used in our analysis, we performed electronic testing to determine the validity of specific data elements in the federal and selected states' databases that we used to perform our work. We also interviewed officials responsible for their respective databases, and reviewed documentation related to the databases and literature related to the quality of the data. On the basis of our discussions with agency officials and our own testing, we concluded that the data elements used for this report were sufficiently reliable for our purposes. We identified criteria for Medicaid fraud controls by examining federal and state policies, laws, and guidance, including policy memos and manuals. We interviewed officials from CMS and the state governments of Arizona, Florida, Michigan, and New Jersey involved in Medicaid program administration and Medicaid fraud response. We conducted this performance audit from March 2014 to May 2015 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our audit findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Matthew Valenta (Assistant Director), John Ahern, Mariana Calderón, Melinda Cordero, Julia DiPonio, Lorraine Ettaro, Colin Fallon, Barbara Lewis, Maria McMullen, Kevin Metcalfe, Rubén Montes de Oca, James Murphy, Christine San, Paola Tena, and Carolyn Yocom made key contributions to this report.
Medicaid is a significant expenditure for the federal government and the states, with total federal outlays of $310 billion in fiscal year 2014. CMS reported an estimated $17.5 billion in potentially improper payments for the Medicaid program in 2014. GAO was asked to review beneficiary and provider enrollment-integrity efforts at selected states. This report (1) identifies and analyzes indicators of improper or potentially fraudulent payments in fiscal year 2011, and (2) examines the extent to which federal and state oversight policies, controls, and processes are in place to prevent and detect fraud and abuse in determining eligibility. GAO analyzed Medicaid claims paid in fiscal year 2011, the most-recent reliable data available, for four states: Arizona, Florida, Michigan, and New Jersey. These states were chosen because they were among those with the highest Medicaid enrollment; the results are not generalizable to all states. GAO performed data matching with various databases to identify indicators of potential fraud, reviewed CMS and state Medicaid program-integrity policies, and interviewed CMS and state officials performing oversight functions. GAO found thousands of Medicaid beneficiaries and hundreds of providers involved in potential improper or fraudulent payments during fiscal year 2011—the most-recent year for which reliable data were available in four selected states: Arizona, Florida, Michigan, and New Jersey. These states had about 9.2 million beneficiaries and accounted for 13 percent of all fiscal year 2011 Medicaid payments. Specifically: About 8,600 beneficiaries had payments made on their behalf concurrently by two or more of GAO's selected states totaling at least $18.3 million. The identities of about 200 deceased beneficiaries received about $9.6 million in Medicaid benefits subsequent to the beneficiary's death. About 50 providers were excluded from federal health-care programs, including Medicaid, for a variety of reasons that include patient abuse or neglect, fraud, theft, bribery, or tax evasion. Since 2011, the Centers for Medicare & Medicaid Services (CMS) has taken regulatory steps to make the Medicaid enrollment process more rigorous and data-driven; however, gaps in beneficiary-eligibility verification guidance and data sharing continue to exist. These gaps include the following: In October 2013, CMS required states to use electronic data maintained by the federal government in its Data Services Hub (hub) to verify beneficiary eligibility. According to CMS, the hub can verify key application information, including state residency, incarceration status, and immigration status. However, additional guidance from CMS to states might further enhance program-integrity efforts beyond using the hub. Specifically, CMS regulations do not require states to periodically review Medicaid beneficiary files for deceased individuals more frequently than annually, nor specify whether states should consider using the more-comprehensive Social Security Administration Death Master File in conjunction with state-reported death data when doing so. As a result, states may not be able to detect individuals that have moved to and died in other states, or prevent the payment of potentially fraudulent benefits to individuals using these identities. In 2011, CMS issued regulations to strengthen Medicaid provider-enrollment screening. 
For example, CMS now requires states to screen providers and suppliers to ensure they have active licenses in the state where they provide Medicaid services. CMS's regulations also allow states to use Medicare's enrollment database—the Provider Enrollment, Chain and Ownership System (PECOS)—to screen Medicaid providers so that duplication of effort is reduced. In April 2012, CMS gave each state manual access to certain information in PECOS. However, none of the four states GAO interviewed used PECOS to screen all Medicaid providers because of the manual process. In October 2013, CMS began providing interested states access to a monthly file containing basic enrollment information that could be used for automated screening, but CMS has not provided full access to all PECOS information, such as ownership information, that states report is needed to effectively and efficiently process Medicaid provider applications. GAO recommends that CMS issue guidance for screening deceased beneficiaries and supply more-complete data for screening Medicaid providers. The agency concurred with both of the recommendations and stated it would provide state-specific guidance to address them.
The U.S. airline industry is vital to the U.S. economy. Airlines directly generate billions of dollars in revenues each year and contribute to the economic health of the nation. Large and small communities rely on airlines to help connect them to the national transportation system. To operate as an airline carrying passengers or cargo for hire or compensation, a business must have an air carrier (airline) operating certificate issued by the Federal Aviation Administration (FAA), based on federal aviation regulations. Certification is determined by the type of commercial service being provided. Airlines that provide scheduled commercial service operate in accordance with Part 121 of Title 14 of the Code of Federal Regulations (CFR) and are often grouped into two categories: mainline and regional. Mainline airlines include (1) passenger service providers, such as American and Delta, that offer domestic and international passenger service on larger airplanes, and (2) cargo service providers, such as United Parcel Service and Federal Express, that offer domestic and international cargo service. Regional airlines include (1) passenger service providers, such as SkyWest and ExpressJet, that offer domestic and limited international passenger service, generally using airplanes with fewer than 90 seats and transporting passengers between large hub airports and smaller airports, and (2) cargo service providers, such as ABX Air and Kalitta Air, that provide domestic and limited international cargo service on a charter or contract basis. Regional airlines generally provide service to smaller communities under capacity purchase agreements with mainline airlines, operate about half of all domestic flights, and carry about 22 percent of all airline passengers. At the end of fiscal year 2012, according to FAA, the U.S. commercial airline industry consisted of 15 scheduled mainline airlines and 70 regional airlines. According to available data, there were over 72,000 airline pilots employed nationwide in 2012. In addition to mainline and regional airlines, other smaller, commercial air-service providers offer scheduled and unscheduled service, via commuter or on-demand operations, and operate in accordance with Part 135 of Title 14 of the CFR. It takes many years of training and significant financial resources to meet FAA's certification and aeronautical experience qualifications to become an airline pilot. FAA issues several types of pilot certificates that airline pilots progress through—including student pilot, private, commercial, and airline transport pilot (ATP). Federal aviation regulations establish the core requirements for each pilot certification, including the eligibility requirements, aeronautical knowledge, aeronautical experience, and flight proficiency standards. Regulations also govern what pilots with each certificate can do. For example, a private pilot certificate allows pilots to fly solo or carry passengers in any aircraft for which they are qualified, but not to fly for compensation; a commercial pilot certificate is necessary for a variety of non-airline pilot jobs. The ATP certificate is the highest level of pilot certification, requires the highest amount of cumulative flight time, and is necessary to fly as a captain or first officer for an airline. Airline pilots are mostly trained through FAA-certified pilot schools at a college or university—typically through 2- and 4-year degree programs—non-collegiate vocational schools, or in the military.
Outside of military training, where service members receive compensation while training to become a pilot, costs can vary significantly for individuals wishing to become a pilot depending on the number of certificates and ratings they wish to attain and the school or training program they choose. Generally, the cost to attain a private pilot certificate averages about $9,500, according to the University Aviation Association. However, the academic education and flight training from a 4-year aviation degree program to obtain up to a commercial pilot certificate with additional ratings necessary to be hired as a pilot for commercial flying can cost well in excess of $100,000. Pilot students generally do not come out of collegiate and vocational pilot schools with the necessary requisites to attain an ATP certificate. Individuals will typically graduate from these schools with a commercial pilot certificate, and then they must gain experience by accumulating flight time and pass additional certification testing to obtain an ATP certificate. Similarly, upon separation from the military, military pilots would have to meet the same flight time requirements and pass the certification tests as a civilian pilot would in order to obtain an ATP certificate, although they may be able to use their military flight time to meet those requirements. Until recently, regional and mainline airlines were permitted to hire first officers who had obtained a commercial pilot certificate which, among other things, required a minimum of 250 hours of flight time. However, following the 2009 Colgan Air, Inc. crash in New York, the Airline Safety and Federal Aviation Administration Extension Act of 2010 mandated that FAA further limit the hours of pilot flight and duty time to combat problems related to pilot fatigue and increase training requirements and pilot qualifications for first officers. In January 2012, FAA issued a rule mandating that pilots have certain rest periods between flights and limiting the number of consecutive hours a pilot may fly. This rule became effective as of January 2014. In July 2013, FAA, as required by the law, issued a new pilot qualification rule that increased the requirements for first officers who can fly for U.S. passenger and cargo airlines. The rule requires that first officers, like captains, now hold an ATP certificate, which requires, among other things, a minimum of 1,500 hours of total time as a pilot. The law also gave FAA discretion to allow specific academic training courses to be credited toward the required hours of total time as a pilot. As such, the rule included an allowance for pilots with fewer than 1,500 hours of total time as a pilot to obtain a "restricted-privileges" ATP certificate (R-ATP)—that is, to allow pilots to serve as first officers until they obtain the necessary 1,500 hours of total time as a pilot needed for an ATP certificate—when they meet certain requirements: (1) former military pilots with 750 hours of total time as a pilot; (2) graduates of approved 4-year aviation degree programs with 1,000 hours of total time as a pilot who meet other requirements; and (3) graduates of approved 2-year aviation degree programs with 1,250 hours of total time as a pilot who meet other requirements. As of January 24, 2014, 37 collegiate 2- and 4-year aviation degree programs have been authorized to certify graduates to be eligible to apply for an R-ATP certificate.
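The total-flight-hour thresholds just listed can be summarized in a small lookup, shown below as an illustrative Python sketch. The function and category names are assumptions made for illustration, and the sketch deliberately omits the rule's other requirements (such as age, specific coursework, and cross-country or night flight time).

# Hour thresholds for the restricted-privileges ATP (R-ATP) versus the unrestricted ATP.
RATP_HOUR_MINIMUMS = {
    "former_military_pilot": 750,
    "approved_4_year_aviation_degree": 1000,
    "approved_2_year_aviation_degree": 1250,
}
ATP_HOUR_MINIMUM = 1500  # all other pilots need the unrestricted ATP minimum

def minimum_total_hours(background):
    """Return the minimum total pilot time for a given training background."""
    return RATP_HOUR_MINIMUMS.get(background, ATP_HOUR_MINIMUM)

def hours_remaining(background, hours_logged):
    """Hours still needed before the pilot can apply for an ATP or R-ATP certificate."""
    return max(0, minimum_total_hours(background) - hours_logged)

print(hours_remaining("approved_4_year_aviation_degree", 400))  # 600
print(hours_remaining("other_civilian_training", 400))          # 1100

As the regulation text cited below indicates, the degree-based paths also carry coursework and institutional-recognition conditions beyond the hour minimums.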
In order to qualify for the R-ATP with a minimum of 1,250 hours of total time as a pilot, the graduating pilot must hold a bachelor's or an associate's degree with an aviation major, complete 30 aviation semester credit hours, and receive a commercial pilot certificate and instrument rating from an institution of higher education whose coursework has been recognized by the FAA Administrator as designed to improve and enhance the knowledge and skills of a person seeking a career as a professional pilot. 14 CFR 61.160(c). Options available to these pilots to build the necessary flight hours include the following: obtain a certified flight instructor (CFI) certificate, which allows pilots to accrue flight hours while instructing new student pilots; become employed with Part 135 air service providers (i.e., commuter and on-demand, or non-Part 121 cargo operations) as a first officer, where a commercial pilot certificate (minimum 250 hours) is required, among other requirements; become employed performing Part 91 operations—such as banner towing, crop dusting, and corporate flights; pay for flight time, such as renting aircraft for flying or training in a flight simulation training device; work abroad for foreign airlines; or join the U.S. military and be trained as a pilot. Several federal agencies have a role in supporting and developing the pilot workforce. As mentioned, FAA is responsible for the administration of pilot certification (licensing), among other things, and DOD, the Department of Veterans Affairs (VA), DOL and its Employment and Training Administration (ETA), and Education each have a role that may contribute to the availability of airline pilots (see table 1). The holder of a valid flight instructor certificate may provide pilot training and instruction for pilot certification in any aircraft for which they are qualified. 14 C.F.R. § 61.183. Part 61 allows for a maximum of 25 hours of training in a full flight simulator representing a multiengine airplane to be credited toward the flight time requirement for an ATP certificate if the training was accomplished as part of an FAA-approved training course. 14 C.F.R. § 61.159(a)(3). In addition, no more than 100 hours of the total time requirement for an ATP certificate may be obtained in a full flight simulator or flight training device, provided the device represents an airplane and the aeronautical experience was accomplished as part of an FAA-approved training course. 14 C.F.R. § 61.159(a)(5). Historical labor market data from 2000 through 2012 provide mixed evidence as to whether an airline pilot shortage exists. The unemployment rate for the pilot occupation—a key indicator for a shortage—has been much lower than for the economy as a whole, which is consistent with a shortage. On the other hand, wage earnings and employment were not consistent with the existence of a shortage, as data for both indicators showed decreases over the period. Looking forward, to meet the expectation of growth in the industry and to replace expected mandatory age-related pilot retirements, projections indicate the industry will need to hire a few thousand pilots on average each year over the next 10 years. Data indicate that a large pool of qualified pilots exists relative to the projected demand, but whether such pilots are willing or available to work at wages being offered is unknown.
Furthermore, the number of pilot certificate holders has not been increasing, and fewer students are entering and completing collegiate pilot training programs. Studies and analyses related to the supply of airline pilots find that a shortage may arise depending on several factors, including the extent of future industry growth, the wages being offered, and escalation in education costs. As airlines have started hiring to address growth demands and attrition, 11 of the 12 regional airlines we interviewed reported difficulties filling entry-level first-officer vacancies. Mainline airlines, since they hire experienced pilots largely from regional airlines, have not reported similar difficulties, although mainline airline representatives expressed concerns that entry-level hiring problems could affect the ability of their regional partners to provide service to some locations. While no single metric can be used to identify whether a labor shortage exists, labor market data can be used as "indicators," in conjunction with observations from stakeholders. According to economic literature, one can look at historical unemployment rates, as well as trends in employment and earnings. If a labor shortage were to exist, one would expect (1) a low unemployment rate signaling limited availability of workers in a profession, (2) increases in employment due to increased demand for that occupation, and (3) increases in wages offered to draw more people into the industry. Of these three indicators, the unemployment rate provides the most direct measure of a labor shortage because it estimates the number of people who are unemployed and actively looking for work in a specific occupation. The Bureau of Labor Statistics (BLS) household-survey-based Current Population Survey (CPS) data used to evaluate these three indicators combined airline and commercial pilots into a single occupational category of pilots; therefore, we cannot isolate the extent to which the indicators apply to only airline pilots, although airline pilots represent about two-thirds of the employment within the occupation. According to BLS data we analyzed from 2000 through 2012, the unemployment rate of pilots has averaged 2.7 percent—a much lower unemployment rate than for the economy as a whole. This level of unemployment would be consistent with a shortage because it suggests few pilots during this time frame reported that they were looking for employment as a pilot and were unable to find it. Furthermore, in relative terms, over the entire period, the pilot occupation had the 53rd lowest unemployment rate out of the 295 occupations for which annual BLS data are available. Data on the other two indicators, wage earnings and employment growth, are not consistent with the existence of a shortage in the occupation. First, our analysis of BLS data from 2000 through 2012 shows that the median weekly earnings in the pilot occupation decreased by 9.5 percent over the period (adjusted for inflation), or by an average of 0.8 percent per year. According to economic literature, a positive growth in wages is required for a shortage to be present. So, by absolute standards, the findings for this indicator do not appear consistent with a shortage for pilots during the time frame. We also compared wages in this occupation to all other occupations and found wage growth for pilots has been low compared to other occupations. Specifically, the pilot occupation would rank 187th out of the 250 occupations for which annual data are available.
However, other factors can account for a decline or lack of growth in earnings even during a labor shortage. Earnings may be slow to adjust to other labor market trends, or certain aspects of an industry may prevent wages from increasing. For example, airlines may have limited flexibility to adjust wages for entry-level positions in response to a potential shortage due to the seniority-based pay systems airlines have in place for pilots and because airlines' pilot wages are often negotiated contractually with labor unions. Second, for the rate of employment growth, our analysis showed employment for pilots has actually decreased by 12 percent from 2000 to 2012, a decrease that is also not consistent with a shortage. As previously stated, the airline industry has experienced considerable volatility over the last decade due to recessions, bankruptcies, and merger and acquisition activities that have curtailed growth in the industry. By relative standards, the rate of employment growth for the pilot occupation ranked about 331st of the 490 occupations for which annual BLS data are available. Our analysis of labor market data has a number of limitations given the nature of the CPS and Occupational Employment Statistics (OES) data from BLS and the scope of our analysis. Occupations in the Standard Occupational Classification (SOC) system are classified using occupational definitions that describe the work performed and may not take into account specific requirements an employer seeks. For example, some airlines may require specific aircraft type ratings. We identified the following other limitations of the labor market indicators: Data are collected through a household survey and are subject to sampling and response errors. Typically, one individual will identify occupation, employment, and wage data for all household members; individuals may report incorrect or inconsistent information. Survey results of unemployment rates are based on the person's last job, rather than the longest job held or occupation in which a person is trained or looking for work; the data therefore can miss individuals who are seeking work in a particular occupation. For example, airline pilots who lost their jobs, worked temporarily in another occupation (perhaps even within aviation), but considered themselves pilots and were seeking employment as pilots when surveyed would not be counted as unemployed pilots in the CPS data; rather, they would be classified according to the occupation they had held temporarily. BLS collects data on earnings for pilots in all stages of their careers, so we could not examine whether starting earnings—which would be more likely to indicate if wages were rising to attract entry-level workers—have increased. Data are collected at a national level; while not all indicators were consistent with a labor shortage, our analysis would not identify any regional shortages. Research by BLS and others suggests job vacancy data as another potential indicator for identifying labor shortages. However, BLS does not collect information on job vacancies at the occupational level. Some job vacancy data are collected by some states and private companies, but the data are limited. We could not obtain complete and sufficiently reliable occupational-level job-vacancy data from these sources. Finally, as mentioned above, no single measure can provide definitive evidence as to whether a labor shortage exists. Rather, these data can indicate the extent to which employers may have difficulty attracting people at the current wage rate.
Moreover, even if perfect data existed, the term "labor shortage" is sometimes used to describe a variety of situations, some of which are generally not considered to be shortages. For example, during periods of economic recession, employers may become accustomed to hiring a high caliber of candidate with specific training or levels of experience at a prescribed wage rate. In these cases, employers can be more selective when hiring from among the candidates for the position. However, during an economic expansion, when companies may be increasing the size of their workforce, it is likely that the number of job applicants will shrink and employers may have difficulty finding the same caliber of candidates that they could find during a downturn. Under these circumstances the employer's challenge may become one of quality of available people, not necessarily quantity of people willing and able to do the job. Economic literature also suggests that to describe the nature and scope of any potential shortage, these indicators should be considered in conjunction with other information, such as trends in the industry that can affect the demand for and supply of qualified professionals and the hiring experiences of employers, which we discuss in the following sections. The number of pilots that U.S. airlines will need to hire will be driven by increases in passenger traffic (growth) and replacements for retiring and attriting pilots. Several reports have projected the need for pilots in the future. Audries Aircraft Analysis—an aviation industry analysis firm—developed a forecast of pilot needs over the next 10 years based on forecasts of new aircraft orders and expected deliveries from aircraft manufacturers Boeing, Airbus, and Embraer. Using industry averages for numbers of pilots needed per plane, the forecast determines how many pilots will be needed to accommodate the projected fleet growth, and couples this number with industry data regarding expected retirements. An academic study conducted by researchers from six universities, led by researchers from the University of North Dakota, forecasts the demand for pilots using similar techniques. FAA also projects the need for pilots based on forecasts of growth in passenger demand and expected retirements. While these projections are helpful in gaining a sense for potential changes in aviation employment, developing long-term occupational employment projections is inherently uncertain for a variety of reasons. Most importantly, each projection relies on a set of assumptions about the future, some of which may not come to fruition. For example, the projections discussed above relied on assumptions of continued economic growth, but if a recession or other unexpected economic event were to occur, the projections for employment are likely to be overstated. These projections vary in their results, and based on those results, we estimated that a range of roughly 1,900 to 4,500 new pilots will need to be hired on average annually over the next 10 years, as follows: Audries Aircraft Analysis developed pilot demand forecasts based on aircraft manufacturers' forecasts of fleet growth. Each manufacturer uses a slightly different method to create its forecasts. For example, some projections include certain cargo aircraft, and some do not. Despite the differences in methods, the fleet growth forecasts yielded similar results.
Each forecast resulted in the projected need for pilots steadily rising over the next 10 years to accommodate growth and replacement of retiring and attriting pilots. Annually averaged, the Embraer forecast resulted in a projection of about 2,900 new pilots needed per year over the next decade; the Boeing forecast resulted in about 3,300 new pilots, and the Airbus forecast resulted in about 3,900 new pilots. It is important to note that these forecasts encompass the entire North American market and are not specific to the United States. In addition, the Boeing forecast projected demand for 498,000 new airline pilots worldwide over the next 20 years. This global demand for pilots may also have an effect on the available supply of pilots for U.S. airlines in the future, as foreign airlines also recruit U.S. pilots. The academic study led by the University of North Dakota estimated demand for pilots for roughly the next 20 years in its study of airline pilot labor supply. This study derived demand based on industry growth, retirements, and attrition for reasons other than retirements. Industry growth was derived from forecasts of new aircraft from the Airline Monitor and estimates of the average number of pilots needed per aircraft. Expected retirements came from industry data, and the study used an estimate of an attrition rate for reasons other than retirement of 1.5 percent. The study estimates that the industry would need to hire over 95,000 new pilots over about the next 20 years, with about 45,000 being needed in the next 10 years, for an annual average of about 4,500 over the next decade. The FAA 2013 forecast projects that passenger demand for U.S. airlines over the next 20 years will grow at an average 2.2 percent per year through 2033, with slow or no growth expected in 2013 and slight growth over the next 5 years assuming the U.S. economy grows at a faster rate. To account for this industry growth and to replace retiring pilots, FAA projects that about 70,000 new pilots with an ATP certificate will be needed through 2032. This equates to an average need for about 3,400 new pilots annually over the next 10 years. The BLS Employment Projections 2012–2022 assume a 6.6 percent net decrease in employment in the overall number of airline pilot positions through the year 2022—which equated to about 4,400 fewer pilot jobs over the time period. This is in contrast with the average expected occupational growth of 10.8 percent for all occupations for this period. Based on the employment projection, we calculated that an average of 440 pilot jobs will be lost annually through 2022. However, while fewer airline pilot jobs will exist during the 10-year period, BLS also projects, at the same time, 19,200 airline pilot job openings, or an annual average of 1,920 openings, that may be available to be filled due to retirements and attrition. The BLS employment projections assume that growth in supply will be adequate to meet the demand, and so the analysis is not designed to forecast whether a labor shortage might develop in any given occupation. In addition to the need for airlines to hire new pilots based on industry growth and replacement of retirements, FAA's new rule on pilot flight and duty time may engender a one-time staffing adjustment for airlines.
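The annual-average figures cited for these projections follow from simple averaging of the rounded totals reported above. The sketch below reproduces that arithmetic as a rough check; it does not re-implement any of the underlying forecasting models, and the totals are the rounded figures quoted in the text.

def annual_average(total, years):
    return total / years

# Academic study led by the University of North Dakota: about 45,000 pilots over 10 years.
und_average = annual_average(45_000, 10)            # about 4,500 per year

# BLS 2012-2022 projections: about 19,200 job openings from retirements and attrition.
bls_openings_average = annual_average(19_200, 10)   # about 1,920 per year

# BLS also projects about 4,400 fewer pilot jobs over the same decade.
bls_jobs_lost_average = annual_average(4_400, 10)   # about 440 per year

print(und_average, bls_openings_average, bls_jobs_lost_average)
# The low and high ends of these averages are consistent with the roughly
# 1,900 to 4,500 new pilots per year cited in the text.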
Current crew schedules can vary by airline, the labor contract involved, and the number of pilots assigned to operate each aircraft, and airlines we interviewed varied in their estimates of how many additional pilots they would need to meet the new requirements. Airlines' estimates ranged from no effect on the number of pilots needed to a 15 percent increase in the number of pilots needed as of January 2014. While these projections suggest the need for between roughly 1,900 and 4,500 new pilots on an average annual basis over the next 10 years, we cannot indicate with any level of certainty the actual number of new airline pilots that will be needed or hired in the future. Airlines make a variety of business decisions to meet passenger demand for airlines' operations that could affect the number of pilots that the airlines would need or are able to hire. According to information provided by eight mainline airlines, they expect to hire about 20,800 new pilots from 2014 through 2023. Accordingly, several mainline airlines have announced plans to recall all of their remaining furloughed pilots or begin new hiring efforts. For example, in September 2013, United issued recall notices to its remaining 600 furloughed pilots. According to United representatives, it has also started hiring new pilots with an initial goal of about 60 pilots a month to address the airline's projected future needs. While American and Delta had already recalled all of their furloughed pilots, each announced plans for future hiring. In October 2013, American announced plans to hire 1,500 pilots over 5 years. Delta planned to hire 300 pilots in November 2013 and expects to hire about 50 pilots per month through September 2014. Several regional airlines we spoke to have also been actively hiring new pilots. For example, since March 2013, ExpressJet has hired from 32 to 48 pilots monthly. Also, representatives of American Eagle told us that they expect to hire an average of 250 pilots per year for the next 10 years. While there were over 72,000 airline pilots employed in 2012, FAA data show a total of 137,658 active pilots under the age of 65 who held ATP certificates, as of January 6, 2014. This large pool of ATP certificate holders, however, can include pilots who are not available for work or are not suitable or competent to act as pilots in airline operations on large jet-powered aircraft. Data were not available to determine or verify how many active ATP certificate holders were otherwise employed. The pilots not employed by airlines may also be serving as pilots in the U.S. military, employed as pilots in non-airline operations, employed by foreign airlines, employed in non-pilot jobs in the aviation industry, or working in non-aviation careers. With respect to pilots holding FAA pilot certificates and potentially working for foreign airlines, in 2012, according to FAA data, about 7,858 pilots with ATP certificates (or about 5 percent of the total number of pilots with ATP certificates) and about 15,994 pilots with commercial certificates (or about 14 percent of the total number of pilots with commercial pilot certificates) were listed with a documented residence outside of the United States. In addition to ATP certificate holders, a large population of commercial pilot certificate holders with instrument ratings also exists. In 2012, for instance, a total of over 116,000 pilots held commercial pilot certificates and about 105,000 of these pilots also held an instrument rating.
While not currently qualified to be airline pilots, future ATP certificate holders typically come from this pool, and the instrument ratings held by some of these individuals suggest that they may be on a pathway to qualifying for an ATP certificate. According to FAA officials, the number of pilots holding an instrument rating is a good indicator for forecasting pilots who are more likely to seek an ATP certificate because an instrument rating is a requirement of ATP certification; an instrument rating is not, however, a requirement to hold a commercial pilot certificate. While these pools of existing ATP and commercial pilot certificate holders exist, the pools have remained relatively flat since 2000 (see fig. 1). The number of pilots under age 65 holding active ATP certificates decreased about 1 percent from 2000 through 2012, while the number of new certificates issued annually decreased 17 percent during this period (7,715 to 6,396) (see fig. 2). However, new issuance of ATP certificates has increased since 2010, an increase that would be expected given that the new pilot qualification rule took effect in July 2013. Commercial pilot certificate holders under age 65 increased 4 percent from 2000 through 2012. The number of new certificates issued each year averaged about 9,900 over this time period. We note that these populations of pilots holding active commercial and ATP certificates, while currently relatively large, have been larger in the past. Also, when mainline airlines increase pilot hiring, the rate at which new pilots enter the pipeline would likely increase, as would the rate at which pilots holding commercial pilot certificates upgrade to ATP certificates. To illustrate, from 1990 through 2000, mainline airlines hired about 31,300 pilots. During that period, the number of ATP certificates held increased by roughly the same number—from 107,732 to 141,596—while the number of commercial pilot certificates held decreased by roughly the same amount—from 149,666 to 121,858. In contrast, when hiring slowed from 2001 through 2012 and mainline airlines hired about 16,900 pilots, there was a decrease in the total number of airline pilot jobs and the number of ATP certificates held increased only slightly—from 144,702 to 145,590—while the number of commercial pilot certificates held actually decreased—from 120,502 to 116,400. The average number of new commercial pilot certificates issued each year was also lower in this period (9,780) compared to the 1990s (11,688). The number of flight instructors is another predictor of individuals moving through the pipeline to becoming an airline pilot. The number of pilots under age 65 holding active flight instructor certificates increased 13 percent from 2000 through 2012 (see fig. 1), while the number of new flight instructor certificates issued each year averaged about 4,700 over this period and remained relatively flat (see fig. 2). Under the new pilot qualification rule, aspiring pilots must accrue more flight hours than was the case in the past, and stakeholders expect that flight instruction is likely to be one of the primary means of attaining these hours. This means that new pilot graduates who decide to work as flight instructors to gain hours will need to hold such positions for a longer period of time. If this occurs, flight instructor turnover will be slower and new pilot graduates may have more difficulty finding flight instructor positions.
On the other hand, representatives of three of the pilot schools we spoke to told us that they are currently facing a shortage of qualified flight instructors. Available evidence suggests that fewer students are entering and completing pilot training since 2001. According to Education's data, the cumulative number of graduates (completions) of undergraduate professional pilot-degree programs—those most likely to pursue a career as an airline pilot—decreased about 23 percent from academic years 2000-2001 through 2011-2012 (see fig. 3). Although data on enrollments are not available, representatives from most of the collegiate and vocational pilot schools we interviewed told us their schools have experienced declines in undergraduate enrollments over the last 10 years. Further, representatives of the 10 collegiate aviation and 2 non-collegiate vocational pilot schools reported waning interest among current and prospective students wanting to pursue professional pilot education. According to these representatives, the airline pilot career has lost some of its historical appeal for young people over the last 10 years due to a variety of factors, including increases in education costs, limited sources of financial assistance, negative perceptions of working conditions and wages for new pilots, and a perceived lack of stability in the industry. In addition, according to these representatives, the new first officer qualification requirements have also had some impact on student perceptions. The new requirements mean pilots must spend additional time accruing flight hours (i.e., 1-2 additional years) prior to being qualified to apply to an airline, during the time when new pilots may be receiving relatively low wages (for example, according to the Aircraft Owners and Pilots Association, flight instructors typically make less than $20,000 per year), and students are facing a longer period of time before they will be financially able to begin repaying their student loan debt. As a result, according to recruiters from four of the schools, students' parents are less encouraging of the career. According to officials at three collegiate aviation schools, due to these and other factors, more students interested in working in the aviation industry are pursuing other piloting careers, such as in unmanned aircraft systems. To illustrate, the officials said that in 2012, they sampled 240 new flight instructor pilots at 17 different collegiate aviation schools and found that while 69 percent (166 instructors) responded that they initially aspired to be airline pilots when they started their pilot training education, only about 38 percent (91 instructors) had aspirations to be airline pilots after graduating from training. Representatives of 5 collegiate aviation and 2 non-collegiate vocational pilot schools also reported financial hardships for many students enrolled in pilot education. Officials representing two collegiate schools told us that based on their discussions with students dropping out of professional pilot education, the lack of financial resources or assistance is often a barrier for students. Although historically the military has been a significant source of pilots for the airlines, according to some airline industry representatives we interviewed, the number of former military pilots being hired by airlines has been declining.
According to these representatives, prior to 2001, some 70 percent of airline pilots hired came from the military, whereas currently they estimated roughly 30 percent come from the military. In addition, all of the airlines we interviewed reported that fewer candidates with military experience are applying for pilot job vacancies than has been their experience in the past. While specific data are not available on the number of pilots separating from the military who sought and gained employment at airlines, according to DOD data, from fiscal years 2001 through 2012, an average of 2,400 pilots separated from the military service branches per year. DOD expects roughly the same trend to continue into the foreseeable future, although future trends may be influenced by several factors, including financial incentives to encourage pilots to stay in the military longer, civil job market opportunities, and changing post-war military missions. Once separated from the military, these pilots could choose to seek employment at an airline if they meet FAA pilot certification requirements, such as flight hour minimums and other requisites, to be an airline pilot. However, we cannot determine the number of these pilots who may meet these qualifications, who would seek employment with civilian airlines after exiting from the military services, or who have the flight experience that airlines require. The academic study led by the University of North Dakota, which was discussed previously, concluded that U.S. airlines will experience a cumulative shortage of about 35,000 pilots over the next 20 years if no actions are taken by the airline industry or government. Using regression analysis, the study found that the number of new CFI certifications has a positive association with pilot hiring by mainline airlines—that is, as pilot hiring tends to increase so do new CFI certifications; however, it has a negative association with the cost of pilot school—that is, as educational costs increase, new CFI certifications tend to decrease. Because of the significant finding of a potential shortage, we reviewed the study's methodology. We also replicated the study's analysis to better understand how the study's key assumptions affected its results. We found that the study's findings of a shortage were based on expectations of hiring needs of mainline airlines of about 95,000 pilots over the next 20 years, and the supply of new pilots being curtailed by the continued acceleration in the cost of training, relative to the general rate of inflation. To predict future excess cost growth (the increase in the cost of pilot training over and above the general economy-wide level of inflation), the study extrapolated the growth of inflation in the cost of flight training over the past several years to the next 20 years. While using historic trends to predict future changes is part of forecasting, in some cases, it can lead to results that may be unlikely. In this case, this method resulted in forecasted year-over-year changes in the cost of flight school of almost 8 percent above its historic mean by the year 2030, which is well above historic averages over the past 20 years. However, other changes in the market for pilot training, such as the openings of other pilot schools, for example, could reduce this inflation.
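To make the sensitivity of the study's result to this assumption concrete, the sketch below compounds a hypothetical training cost under two excess-inflation paths: one that escalates toward roughly 8 points above the historic mean, in the spirit of the study's extrapolation, and one held constant at 1.5 points above it. The starting cost and premium values are illustrative assumptions only, not the study's actual inputs or model.

def project_cost(start_cost, annual_premiums):
    """Compound a starting training cost by year-by-year excess inflation
    (premiums expressed as decimals, e.g., 0.03 = 3 points above general inflation)."""
    cost = start_cost
    for premium in annual_premiums:
        cost *= (1.0 + premium)
    return cost

start = 100_000.0  # hypothetical real cost of pilot training today
years = 20

# Scenario 1: premium escalates linearly from 2 points to 8 points above the mean.
escalating = [0.02 + 0.06 * year / (years - 1) for year in range(years)]

# Scenario 2: premium held constant at 1.5 points above the historic mean.
constant = [0.015] * years

print(round(project_cost(start, escalating)))  # roughly 2.6 times the starting cost
print(round(project_cost(start, constant)))    # roughly 1.3 times the starting cost

Under the escalating path the end-of-period cost is roughly double what it is under the constant-premium path, which is the mechanism by which the extrapolated cost growth suppresses the forecasted supply of new CFI certifications in the study.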
Using a different assumption regarding increases in training costs would result in different outcomes with respect to the size of the forecasted shortage. In fact, guidance from the Office of Management and Budget suggests that assumptions regarding price increases (such as the continuation of current trends) should be varied to test the sensitivity of the final results to that assumption. For example, we found that reducing the assumed rate of increase of inflation in the cost of flight training to only 1-2 points above its historic mean resulted in about 30,000 more CFI certifications—largely ameliorating the estimated shortage. However, the researchers stated that they felt that extrapolating from current trends would be the most responsible forecast to consider but agreed that if the costs of training do not continue to increase at an escalating rate, relative to inflation, as the study forecasted, then the estimated shortage of pilots could be mitigated. Representatives at 11 of the 12 regional airlines told us they have been unable to meet hiring targets for training classes for new-hire first officers; most reported that this has been the case since early 2013. In anticipation of the August 2013 deadline for the new pilot qualification rule, officials at many of these airlines indicated that 6 to 12 months before this deadline, they began seeking new pilots to hire who already had an ATP certificate or had enough flight hours that additional flying would allow them to reach the minimum to qualify for an ATP certificate by the time FAA finalized the rule. However, representatives of 5 regional airlines indicated they have generally been able to meet about 50 percent of their hiring targets to fill training classes. For example, one regional airline representative told us that his airline had monthly targets of hiring 12 new pilots from August through October 2013 but has been able to hire from 2 to 6 qualified applicants each month. Representatives of most of the regional airlines also reported that their existing banks of qualified pilot applicants have dwindled and that they receive fewer applicants than they have had historically in response to hiring announcements. Representatives of one regional airline estimated that where they may have previously had over 1,000 applicants for hiring announcements, they are now seeing about 100. For the most part, the representatives attributed this reduction in the number of applicants to the new pilot qualification rule, citing a couple of factors. First, fewer applicants overall are available who can meet ATP requirements. Second, according to several of the representatives, pilots completing training from pilot schools must now spend more time accruing required flight time—and forego some potential career earnings—before they can apply for entry-level first-officer jobs at regional airlines, and fewer jobs are available in general aviation and non-airline commercial sectors for pilots to accrue the needed flight hours. Additionally, representatives of 6 regional airlines noted that increasing numbers of applicants were not showing up for scheduled interviews; some of the representatives speculated that this might be due to opportunities at other regional airlines or other jobs.
Representatives at 10 of the 12 regional airlines we interviewed told us they have also observed an overall decline in the quality of flight experience of qualified pilots applying for pilot jobs, while some cited higher drop-out rates among new hire classes or observed that new hire candidates seem to be less prepared for the airline environment, compared to the historic norm. Prior to the new pilot qualification rule, regional airlines would often hire entry-level pilots who had recently graduated from pilot training with a commercial pilot certificate and an instrument rating, and had gained between 500 and 700 hours of flight time in commercial operations or in flight instruction. The pilot would then be hired at the regional airline, enter training with the airline, and accrue flight time experience towards an ATP certificate in the airline environment. According to representatives from most of the regional airlines, as a result of the new pilot qualification rule, future applicants will have had to accrue an additional 500 to 750 hours of flight time in flight instruction, where they are not always actually flying a plane, or operating in the general aviation (Part 91) environment wherein flight time is accrued in aircraft such as small, single- and multiengine, propeller airplanes that are not as technically advanced as aircraft operated by airlines. According to these representatives, in their experience, applicants with the greater number of flight hours earned outside the airline environment were less proficient and prepared than previous applicants who had recently completed pilot training with between 500 and 700 hours of flight time. While this has been the recent experience of some regional airlines, we do not have data on where aspiring airline pilots are gaining their flight experience, or empirical evidence regarding how this has changed since the new pilot qualification rule went into effect. Furthermore, judgment on what type of flight experience is most suitable for would-be airline pilots is outside the scope of this report. Representatives at most of the regional airlines also noted that some of the difficulty some regional airlines are experiencing in finding sufficient numbers of pilots with ATP certificates could be influenced by current perceptions about the potential for career opportunities and progression. Key factors that influence pilots to pursue a job with an airline include the opportunity for upgrading to a captain, type of equipment flown, and work schedule. Pilots' pay rates at airlines are based on seniority with a particular airline, and the rates increase each year and when pilots progress from first officer to captain. According to available data for 14 regional airlines, the average new hire hourly wage for all airplane types is currently about $24 per hour for the first year of employment. However, representatives of most of the regional airlines said the hourly wages increase for the second year of employment for first officers—to about $30, according to the data for the 14 airlines. Regional airlines generally tend to have newer pilots who accumulate flight time in smaller aircraft and use that experience as a stepping stone to the higher wages offered at mainline airlines.
According to FAA, the reason that regional airline first officers are willing to accept a relatively low initial salary is the increases in salary that come later in the career, when they advance sequentially to regional airline captain, mainline airline first officer, and, finally, mainline airline captain. The average number of years to upgrade from first officer to captain is 5 years for regional airlines, but representatives of several regional airlines said they expected upgrades to take longer. In addition, the new pilot qualification rule has extended the period before a pilot can be hired by an airline. Therefore, individuals interested in an airline pilot career would likely expect several more years at the lower end of the pay scale than had been the case in the past. Several industry representatives also noted, however, that the potential career earnings for an airline pilot continue to be significant. Some senior captains at mainline airlines can make $200,000 or more annually in base salary. Pilot pay rates are also based on the type of aircraft that airlines fly because higher pay rates are associated with flying larger, more complex airplanes, and, thus, opportunities to eventually upgrade to flying these airplanes are important in progressing in the career. Representatives of 6 of the 12 regional airlines generally said that young, entry-level pilots have tended to favor the airlines that operate larger regional jet airplanes as opposed to those that operate turboprop-powered airplanes. Therefore, according to one regional airline, it could be difficult at times for some regional airlines to find pilots to hire as first officers willing to fly, for example, small turboprop airplanes when other opportunities are available with other airlines to immediately or eventually fly larger regional jets due to the career opportunities and associated higher pay rates. According to two small regional airlines—those that generally operate small turboprop airplanes—prior to the new pilot qualification rule, they were able to attract sufficient numbers of pilots with an expectation that these pilots would build flight experience over several years and eventually leave for other airline opportunities. However, since the rule went into effect, regional airlines of this size cannot compete for the available pilots with ATP certificates. Due to issues in finding enough pilots with ATP certificates, one of these small regional airlines has petitioned FAA for approval that would allow it to use some of its smaller 19-seat airplanes under a Part 135 operation—which would not be subject to the new first-officer qualification requirement to have an ATP certificate—on specific routes. According to the representatives of the mainline airlines we spoke with, they are not currently experiencing any difficulty in attracting qualified and desirable candidates. These representatives generally credited higher pay and benefits, better retirement options, and more flexible work schedules than what regional airlines typically offer. For instance, the average hourly wage for first officers at 10 mainline airlines for all airplane types, for which an ATP certificate is required, is currently about $48 per hour for the first year of employment. The mainline airline representatives did not anticipate any problems as they seek to increase hiring in the future and stated that they could draw from the pool of pilots now employed at regional airlines.
However, representatives did express concerns that their regional partners may be experiencing difficulties finding qualified entry-level pilots. Representatives at two mainline airlines were concerned that as they pull pilots from the ranks of their regional partners, the regional partners may have trouble replacing those pilots, a potential chain reaction that might result in regional connecting services’ being curtailed. Five regional airlines we interviewed are currently limiting service to some smaller communities because they did not have pilots available to provide that service. Other industry stakeholders expressed similar concerns that service to small communities will continue to suffer going forward. Economic literature identifies possible actions that employers in a market may take to mitigate a labor shortage. Some of the actions discussed in economic literature are already occurring as part of airline, collegiate pilot school, and government efforts to attract more pilots to the airline industry, including increased recruiting and financial incentives. However, such actions have associated costs and can affect the industry in various ways. Federal agencies have several programs aimed at promoting aviation careers and providing financial assistance for education. However, stakeholders suggested several additional actions that government could take to increase the availability and flexibility of financial assistance available to pilot students and to create additional pathways to becoming an airline pilot. According to economic literature we reviewed, employers—which are the first to identify a shortage when they encounter difficulty filling vacancies at the current wage rate—may take several actions in response to a perceived labor shortage. The actions vary in desirability for the employer based on resources required and their permanency. For example, increasing recruiting requires fewer resources than raising wages; further recruitment efforts could also be halted if labor market conditions change, whereas wages, once raised, may not be easily lowered. Employers may also choose to take some of these actions for reasons other than filling vacancies—for instance, to improve morale among current employees or to increase profitability. Some of the actions suggested in the literature are not feasible for airlines to take with respect to pilots. In response to difficulties filling employment vacancies, employers may: Increase recruiting efforts. This includes such activities as increasing advertising, using public or private employment agencies, and paying recruiting bonuses to employees who refer new hires. Train workers for the job. In a difficult labor market, an employer that traditionally relied upon colleges or vocational or trade schools to train its workforce may choose to offer or sponsor training. Improve working conditions. Equipment or facility upgrades, training, and job recognition efforts may all be effective means to attract and retain personnel. Reduce the minimum qualifications for the job. Employers may have set minimum qualifications higher than necessary and may choose to reduce those qualifications when hiring becomes difficult. As discussed, regulation sets minimum qualifications for airline pilots. 
However, most regional and mainline airlines could have hiring requirements in excess of, or in addition to, the regulatory minimums that could be reduced, although airlines with such requirements are often not willing to do so because they view their requirements as important to the safe operation of their airline. Offer bonuses to new employees. Employers may offer new employees a “signing” bonus such as a cash payment or an agreement to cover the new employee’s moving expenses. Improve wages and fringe benefits. Increasing wages will help increase the number of personnel willing to work in a particular position or occupation. However, employers are reluctant to do this because they may be forced to raise the wages of current employees as well. Further, unlike some other actions, once wages are raised, it is unlikely that they will be reduced later if hiring becomes less difficult. Contract out the work. If employers cannot fill vacancies for employees in certain occupations, they may contract out those tasks to another company. Turn down work. If an employer has exhausted other means to mitigate its hiring challenges and vacancies persist, it may choose to turn down work or curtail services. Airlines and pilot schools have used a number of these strategies to attract more individuals to a career as an airline pilot. Economic literature suggests that increased recruiting is a logical first step to fill vacancies because it requires relatively fewer resources to implement than other potential options for attracting more interest in an occupation experiencing a shortage. Most of the airlines with whom we spoke reported that they have continued involvement with various recruiting activities, such as attending career events, including job fairs, and a couple of airlines reported that they had increased such activity to recruit more potential pilots. For example, representatives of one regional airline told us that after not hiring for several years and furloughing pilots, they have increased their recruiting efforts at some college aviation schools as well as Part 135 air service providers as part of their plan to begin hiring again. In addition, representatives from another airline said that they have almost doubled the size of their recruiting department to facilitate attendance at events and started to advertise new job openings—something they have not previously done. Some collegiate pilot schools have also expanded recruiting efforts to the next generation of potential future pilots. Officials at some of the collegiate pilot schools we met with had developed outreach programs focused on local elementary and high school students to build interest in aviation, which economic literature suggests could limit any future labor shortages. For example, Embry-Riddle Aeronautical University works with seven high schools that provide STEM-related courses (science, technology, engineering, and math) intended to immerse and prepare high school students in these academic areas for college as well as jobs in the aviation industry. In another example, the Metropolitan State University of Denver, which has a commercial pilot program, coordinates with other groups in Colorado to stimulate interest in careers in STEM fields from the preschool level through the graduate school level.
Airlines were also looking for ways to help new pilots gain additional flight time and training to eventually qualify for an R-ATP or ATP certificate, and some regional and mainline airlines had begun to restructure “bridge agreements” with collegiate and vocational pilot schools. Prior to the new pilot qualification rule, regional airlines would develop these arrangements with aviation schools as a way to directly recruit pilot graduates with a commercial pilot certificate and instrument rating as first officers, in which the airlines would typically lower their minimum hiring standards related to flight time and experience for desired pilots from these schools. Some regional and mainline airlines indicated that they had implemented such partnerships with pilot schools to promote greater interest in the field and provide a pathway from pilot school to employment as an airline pilot. For example, ExpressJet, a regional airline that contracts with Delta, has partnered with 11 collegiate aviation schools to offer selected students guaranteed employment at ExpressJet as a first officer and eventually a guaranteed interview at Delta Airlines once the student gains enough experience. Since implementation of the new pilot qualification rule requiring all airlines’ first officers to have an ATP certificate, airlines have begun to change their bridge programs to help potential employees gain the necessary flight time and training to qualify for an ATP certificate. For example, two regional airlines are hiring pilots without an ATP certificate who are currently flight instructors. As airline employees, these pilots receive employee benefits such as medical and dental insurance, but continue instructing for a collegiate or vocational pilot school program to build flight time toward their ATP certification. Once these employees obtain an ATP certificate, they are placed into new hire classes to begin the airline’s training program for first officers. Airlines and other stakeholders told us that they are also considering other options to adjust to the new pilot qualification rules, such as exploring new pathways to becoming an airline pilot and finding ways to improve pilot training, which will be discussed later in this report. Regional airlines have started offering financial incentives to entice both graduating students and flight instructors. Offering financial incentives to new pilot hires is advantageous for airlines because it is a one-time cost and only affects the new employees hired. According to economic literature, signing bonuses are most frequently used when employers feel they are under intense pressure to fill vacancies in the short run. For example, two regional airlines that have had difficulty filling their new hire classes have started offering new-hire first officers an upfront $5,000 signing bonus, and one of these airlines also offers up to $10,000 for tuition reimbursement. However, officials of the industry association that represents these airlines told us that these efforts have essentially attracted pilot applicants away from other airlines, but they have not led to an increase in the applicant pool overall. DOD’s Service branches have taken similar actions in direct response to a potential shortage of military pilots by requiring longer service obligations and offering retention bonuses. For example, the U.S. Air Force recently began offering retention bonuses of up to $225,000 to its fighter jet pilots in exchange for a 9-year commitment.
This is an increase from the Air Force’s previous retention offer of a 5-year contract for up to $25,000 per year, for a maximum of $125,000, in exchange for the commitment. Similarly, starting in fiscal year 2013, the U.S. Navy began offering retention bonuses to its pilots ranging from $25,000 to $125,000 for a 5-year commitment and paid over the term of the contract. However, one small regional airline we interviewed recently announced an agreement with the unions that represent its pilots to increase pilot pay, but final approval is subject to ratification by the airline’s pilot membership. If ratified by the pilots, the agreement will immediately increase pay and commuting and schedule flexibility, and allow all pilots who remain with the airline for a year to earn a cash retention bonus. Wage increases are generally accomplished through the negotiation of collective bargaining agreements between airlines and the pilot unions that represent the employed pilot workforce. As previously mentioned, raising wages is not a costless remedy. Since regional airlines generally provide service under capacity purchase agreements with mainline airlines on a contractual basis, regional airlines’ ability to increase wages would likely be limited by their ability to increase revenue (i.e., increasing passenger fares). Finally, economic literature indicates that contracting out or turning down work are options to cope with a labor shortage. Mainline airlines normally contract with regional airlines to expand available service. As previously mentioned, representatives of five regional airlines we interviewed told us that there have been some instances in which they have had to turn down contracted capacity (i.e., scheduled flights) for mainline airline partners by reducing and canceling flights due to a lack of pilot crew availability. According to an official of a small regional airline, for the first time in its history, the airline had to reduce about 20 percent of its scheduled flights in August 2013 because it could not staff all of its airplanes to provide the scheduled flights. Again, such actions are not costless and pose implications for the industry. A continued shortage of pilots for these airlines could mean additional curtailment of services, which thus far has fallen mainly on smaller communities, and over the longer term may result in a contraction of the industry. While no one agency is tasked with developing the pilot workforce, several maintain programs that help promote and train people for aviation-related careers. At the time of its creation in 1958, the FAA was tasked with regulating, promoting, encouraging, and developing civil aeronautics. In 1996, following criticism of its response to the ValuJet crash in the Florida Everglades and to address concerns about its dual role, FAA’s mission was amended to make ensuring the safety of the national airspace system the agency’s top priority. According to FAA, it has continued to promote careers in aviation, but specific references were deleted from its mandate. Nonetheless, FAA has several initiatives aimed at promoting the aviation industry and encouraging young people to pursue careers in aviation.
For example, FAA developed the Aviation Career Education Academies, interactive aviation summer camps geared toward middle- and high-school students interested in aviation and aerospace; the agency also promotes DOT’s National Transportation Summer Institute to introduce secondary school students to all modes of transportation careers and encourage them to pursue transportation-related courses of study at the postsecondary education level. FAA also works with education and industry partners to offer initiatives such as adopt-a-school programs and other activities that expose students and others to aviation and aerospace. FAA works with industry, including the Experimental Aircraft Association, to facilitate the Young Eagle Program, which seeks to expose young people to aviation and give them an opportunity to fly in a general aviation airplane. In addition, FAA’s Aviation and Space Education website is intended to appeal to an audience unfamiliar with aviation, such as students and teachers. Other federal agencies provide financial assistance that is available for students who pursue aviation careers, including pilot training. DOD provides Military Tuition Assistance benefits to service members to help them enhance their professional development. The benefits can be used for pilot training or to pay for certification tests, such as an ATP certification. Education offers various federal aid benefits, such as low-interest student and parent loans, grants, and work-study funds to help cover educational expenses. Collegiate aviation schools and some vocational pilot schools are generally eligible to receive federal financial aid. VA administers education benefit programs, such as the Montgomery G.I. Bill, that can be used to pay for flight training for veterans who are interested in attending aviation programs approved by FAA, such as collegiate aviation schools and some vocational pilot schools. The payment amount varies depending on the program and the type of pilot school. In addition, a 2011 law amended the Montgomery G.I. Bill program to provide financial assistance to veterans specifically for flight-training programs. DOL administers programs under the Workforce Investment Act of 1998 (WIA) in which training services are available to eligible individuals who meet requirements for services—including training to become an airline pilot. However, according to DOL, due to limited available resources, workforce counselors encourage individuals eligible for WIA training funds to also pursue educational funding from other sources (including VA and Education). Nevertheless, according to DOL data from 2010 through 2012, 124 individuals received WIA funding for pilot training. In addition, apprenticeships are available for pilot occupations, but there were no active apprentices as of November 2013. The Internal Revenue Code also provides tax credits—such as the American Opportunity Credit and Lifetime Learning Credit—and various deductions that may be taken to reduce the federal income tax burden for students or those paying the costs of students’ postsecondary education. Airline and pilot school stakeholders we interviewed suggested several actions that could be pursued by government to respond to potential shortages of airline pilots. These actions generally fell into two categories: (1) increasing the availability and flexibility of financial assistance available to aviation students and (2) creating additional pathways to becoming an airline pilot.
Several airline and pilot school officials we interviewed stated that the high cost of pilot training is deterring students from entering pilot school and pursuing an airline pilot career. To pay for pilot training, students typically use a mix of personal funds, personal credit (credit cards and personal loans), scholarships, grants, other private educational loans, and federal financial-assistance programs. However, flight school officials said that students enrolled in collegiate aviation schools and vocational pilot schools are finding it more difficult to qualify for financial aid because many private banks have been tightening restrictions on financing available to potential new-pilot students, and others have left the pilot training loan market. We previously found that in 2009, many lenders offering student loans had exited the market due to limited access to capital in response to the 2007-2009 financial crisis. Since that time, according to officials of some pilot schools we interviewed, stricter lending standards continue to make it difficult for some students and parents to qualify for private loans. In addition, unlike colleges and universities, many vocational pilot schools are not approved or accredited to offer federal financial-aid programs. Some of these schools work with lenders, including banks, credit unions, and private lending institutions, to offer financing options for those students who qualify. A number of stakeholders suggested that making it easier for all pilot schools to participate in federal student-loan programs could help schools train more pilots because many students drop out due to financial difficulties. Aviation stakeholders we interviewed in previous work agreed that one of the most important challenges for maintaining an adequate supply of students for pilot schools is the availability of financial support. Several airline and pilot school officials said the federal government could consider revising the existing student loan requirements for students in pilot schools seeking to become airline pilots—such as extending the loan repayment period, deferring the start of repaying the loan, and increasing the maximum loan amount—or establishing a student-loan repayment or forgiveness program for airline pilots. Loan forgiveness programs may include criteria for a specified length of employment and a required period of timely payments, upon which all or a portion of the remaining loan balance would be eliminated. Some stakeholders suggested that revising loan requirements could provide incentives to attract individuals to the pilot profession. We have also previously found that European airlines have at times funded the training of pilot candidates in response to pilot shortages. In the European countries that we visited for our previous work, many student pilots, following a screening process, were provided training by airline sponsorship with an agreement for future employment with the airline. An example of an airline that follows this practice is Lufthansa, where students are offered the training as part of a partial sponsorship program, wherein candidates are required to pay a small portion of the training costs upfront while Lufthansa provides a student loan to cover the remaining costs. Once training is completed, Lufthansa enters into an employment contract with the candidate, and he or she repays the loan by accepting a lower initial salary.
Other European airlines have begun to assist their students by forming agreements with banks to reduce the risk of providing student loans to flight school students. British Airways helps students secure the funding required for training through a guaranteed bank loan in the hopes that this will increase the pool of qualified applicants. KLM partially funds an insurance policy to help banks cover their student loan default risks for students who end their pilot training early due to poor performance, failed medical examinations, or other unforeseen circumstances. If the insurance policy is executed, students are contractually obligated to cease their pursuit of an airline pilot career. None of the U.S. airlines we interviewed were currently considering such approaches. Some stakeholders suggested that FAA should consider supplementing the current regulatory framework for training new pilots with additional pathways to achieving an ATP certificate. Stakeholders have made these suggestions because the new pilot qualification rule changed the traditional pathway to becoming an airline pilot, and airlines’ initial experience under the new rule suggests that the flight hours new pilots are earning to qualify for an ATP certificate may not be directly relevant to an airline setting. Based on an exemption request to FAA from one of its member airlines, the Regional Air Cargo Carriers Association (RACCA) has supported a regulatory change that would allow first officers in Part 135 cargo-only operations to log certain flight hours that they are currently prohibited from logging, except under limited circumstances. According to RACCA officials, these first officers are frequently recent graduates of flight-training programs with commercial pilot certificates, and allowing the hours flown in these operations to count would give these pilots flight experience toward the qualifications for an ATP certificate that is more commensurate with flying for a passenger airline, since they are flying similar planes under similar conditions—unlike the flight hours logged in flight instruction using training airplanes, or through banner towing and similar types of flight experiences. According to FAA officials, FAA is in the process of developing a proposed rulemaking that could expand the logging of flight time for certain Part 135 operations. A proposal being developed by a consortium of industry stakeholders would request that FAA consider new regulations allowing the airline industry to take greater advantage of the advancements in computer-based and simulation technology for training pilots. According to the group, U.S. pilot training requirements for certification of airline pilots have not been significantly changed for decades and pilots have had to complete the same certification path based on the same training standards and requirements. While the standards for obtaining pilot certificates have changed little over the years, training technology has advanced through the use of simulation and computers. The group suggested that FAA should allow more credit for training using this type of technology in lieu of actual flying. The group argues that training aids provided by computer software, computer-based simulation, and flight simulation training can help students to achieve as good or better competency in various training components, such as aircraft performance, navigation, and aircraft systems operations.
In fact, many of the collegiate aviation schools already provide specialized training in flight-simulation training devices, but FAA allows only a few of these training hours to be credited toward private and commercial pilot certificates. According to the industry consortium, the ability to expand the use of these technologies would enable pilot schools to train the next generation of pilots more efficiently and improve the overall competency of entry-level first officers. Many Asian and European countries have already adopted a similar approach in the form of the multi-crew pilot license (MPL)—an alternative pilot training and certification concept specifically geared toward training airline pilots. The training methods for the MPL are focused on enhancing the quality of training geared toward first officer duties. Such competency-based training for pilots is not new and focuses on the training outcome in terms of how well students perform rather than simply meeting specified numbers of training hours. Thus, training hours are replaced by sets of defined, measurable performance criteria. The MPL training model focuses on the core competencies that pilots need to be able to operate modern jet airplanes during all phases of flight. Many of the airline officials we interviewed suggested that this model for pilot training could serve as an additional career pathway for becoming a U.S. airline pilot. Availability of a sufficient number of qualified pilots is vital to the U.S. airline industry and necessary to support air transportation services for passengers and cargo traveling within the United States or to and from this country. Evidence suggests that the supply pipeline is changing as fewer students enter and complete collegiate pilot-training programs and fewer military pilots are available than in the past. Additional pressure on pilot availability will come from (1) the projected number of mandatory age-related pilot retirements at mainline airlines over the next decade and beyond, (2) the increasing demand for regional airlines to address attrition needs, and (3) the reported lower number of potentially qualified pilots in the applicant pool for filling regional airlines’ first-officer jobs. If the predictions for future demand are realized and shortages continue to develop, airlines may have to make considerable operational adjustments to compensate for having an insufficient number of pilots. To address such a situation, opportunities exist for the airline industry to take action to attract more pilots. For example, airlines can continue to take actions that will promote aviation as an occupation—such as through employment pathway partnerships with pilot schools and additional career and financial support for pilots as they build flight hours for an R-ATP or ATP certificate. In addition, mainline and regional airlines could work together to shift some of the burden of increasing training costs from students as has been done by some European airlines and adjust contractual agreements between mainline and regional airline partners to help regional airlines increase revenue. Furthermore, with the mandate to increase pilot qualifications for airline pilots having only recently gone into effect, opportunities exist to develop new training methods and pathways for students to gain experience relevant to an airline environment.
It is unclear at this point what adjustments could occur within the pilot training system that would help to respond to these stakeholders’ concerns about the current regulations, or if government action may be necessary to enable certain changes. Therefore, we encourage FAA to continue its efforts in working with the airline and pilot training industries in considering additional ways for pilots to build quality flight time that contributes directly to working in airline operations. In the absence of efforts discussed in the report to incentivize and attract more people to the career, several airlines and industry stakeholders expressed some concern that service to some small communities may suffer going forward. Given the opportunities available for the industry to address a possible shortage of pilots, as discussed, as well as actions FAA is considering, we are not making recommendations in this report. In the event that Congress decides that actions in the market are not sufficient and it is necessary for government to intervene, this report offers several options for doing so. We provided a draft of this report to the departments of Defense, Labor, and Transportation for review and comment. DOD had no comments on the report. DOL and DOT provided technical comments that we incorporated as appropriate. In addition, to verify information, we sent relevant sections of the draft report to Airlines for America, the Regional Airlines Association, Malcolm Cohen, Ph.D., and various stakeholders, which also provided technical comments that we incorporated as appropriate. We will send copies of this report to interested congressional committees and members; the Secretary of Defense; the Secretary of Labor; the Secretary of Transportation; the Director, Office of Management and Budget; and others. This report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Our report focuses on the supply of and demand for airline pilots and potential market and government responses. In this report, we described (1) what the available data and forecasts reveal about the need for and potential availability of airline pilots and (2) the types of industry and government actions that are being taken, or might be taken, to attract and retain airline pilots. To address the two objectives, we reviewed and synthesized a range of published reports from GAO, the Department of Transportation (DOT), and the Federal Aviation Administration (FAA) that included general background information on a variety of related issues, such as the pilot certification process; pilot training schools; typical career paths to become an airline pilot; piloting experience and airline pilot compensation; federal- funding programs for pilot training; and the historical and current health of the airline industry. We also reviewed relevant literature related to factors that affect the supply of and demand for airline pilots, including attrition and retention concerns, factors to consider in the future, and international pilot supply and demand issues based on search results from databases, such as ProQuest®, TRID, and Nexis®, as well as trade publications, industry stakeholder groups, and the Internet. 
Furthermore, we reviewed the federal aviation regulations related to training and certification for pilots under Parts 61 and 141, Title 14, Code of Federal Regulations (CFR); as well as oversight of air travel operations in accordance with Parts 91, 121, and 135, Title 14, CFR. We also reviewed provisions of the Airline Safety and Federal Aviation Administration Extension Act of 2010 (Pub. L. No. 111-216) related to “Flight Crewmember Screening and Qualifications” and “Airline Transport Pilot Certification.” We reviewed FAA’s regulatory final rules required by the Act related to addressing pilot fatigue (issued in January 2012); increasing qualification requirements for first officers who fly U.S. passenger and cargo planes (issued in July 2013); and enhancing pilot training requirements for airline pilots (issued in November 2013). To determine what the available data and forecasts reveal about the need for and potential availability of airline pilots, we reviewed relevant economic literature that describes labor market conditions; developed a summary of the general economic principles for evaluating labor market conditions; and identified relevant data sources. Economic literature states that no single definition exists for a labor shortage; however, one can look at multiple indicators—including unemployment rates, employment numbers, and earnings—which might converge to suggest either the presence or absence of a shortage. We obtained these data from the Bureau of Labor Statistics (BLS) Current Population Survey (CPS) for years 2000 through 2012. In 2010, the Standard Occupational Classification (SOC) system’s occupation titles were updated and, as a result, some occupations’ names were changed. We used SAS, a statistical software application, to connect the BLS CPS data for 2000-2010 and 2011-2012 by the SOC for aircraft pilots; this change did not affect our occupation of interest. We analyzed how these indicators have changed over time, and whether these indicators suggest a labor shortage—that is, whether there appears to be an imbalance between the labor supply (i.e., available people) and demand (i.e., available jobs). We analyzed each occupation relative to all other occupations, using a scale with benchmarks developed in previous economic analysis. For the unemployment rate, we looked at the average unemployment rate for each occupation for 2000 through 2012. For both employment and earnings, we analyzed any change over that period. Due to the limitation that airline pilots and commercial pilots are combined into a single occupational category in the CPS data, we also obtained data from the BLS Occupational Employment Statistics (OES) survey for employment and wage earnings and analyzed any change from 2000 through 2012. To verify our results, we consulted with Malcolm Cohen, Ph.D., labor economist and author of the original methodology for conducting indicator analysis. We incorporated his comments as appropriate. Finally, we summarized limitations with the data with respect to how we used them. We determined the data were sufficiently reliable for the purposes of our indicator analysis to provide context on the labor market. To identify future demand for, supply of, or employment of airline pilots, we analyzed projections for airline pilots in the United States.
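To illustrate the indicator analysis described above, the following is a minimal sketch in Python that computes the three indicators from yearly series and checks whether they converge. The figures and thresholds are hypothetical placeholders, not actual BLS data or the benchmarks from the cited methodology.

# Minimal sketch of a labor-shortage indicator analysis (hypothetical data and
# thresholds; not actual BLS figures or the cited benchmark methodology).

def percent_change(series):
    """Percent change from the first to the last value in a yearly series."""
    return (series[-1] - series[0]) / series[0] * 100.0

# Hypothetical yearly values for one occupation, 2000 through 2012.
unemployment_rate = [2.9, 3.1, 4.0, 3.8, 3.0, 2.5, 2.2, 2.4, 3.5, 4.1, 3.9, 3.2, 2.8]
employment = [110000, 108000, 104000, 101000, 99000, 98000, 97000,
              96500, 95000, 93000, 92000, 91500, 91000]
weekly_earnings = [1650, 1640, 1600, 1580, 1560, 1540, 1530,
                   1520, 1500, 1480, 1470, 1460, 1450]

avg_unemployment = sum(unemployment_rate) / len(unemployment_rate)
employment_change = percent_change(employment)
earnings_change = percent_change(weekly_earnings)

# A shortage is suggested only when the indicators converge, for example low
# average unemployment together with rising employment and rising earnings.
signals = {
    "low unemployment": avg_unemployment < 3.0,
    "rising employment": employment_change > 0,
    "rising earnings": earnings_change > 0,
}

print(f"Average unemployment rate: {avg_unemployment:.1f} percent")
print(f"Employment change, 2000-2012: {employment_change:+.1f} percent")
print(f"Earnings change, 2000-2012: {earnings_change:+.1f} percent")
if all(signals.values()):
    print("Indicators converge toward a possible shortage.")
else:
    print("Indicators do not converge; no clear evidence of a shortage.")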
To identify relevant studies, we performed a literature review of scholarly material, government reports, and books, among others, to identify any employment projections for airline pilots and limited our results to those projecting employment in the United States (or North America) using databases that included ProQuest®, TRID, and Nexis®. We identified three demand-based forecasts—two conducted by government (FAA Aerospace Forecast Fiscal Years 2013-2033 and BLS Employment Projections 2012-2022), and one conducted by industry (Boeing Current Market Outlook 2013-2032)—and obtained each for analysis. To understand these projections, we reviewed the processes, methodologies, and sources of information used to make the projections. We also discussed the projections with knowledgeable staff involved with each study. We did not verify the data that the companies collected and used. Rather, we summarized the methodology and results for each and discussed any limitations we identified with respect to how the forecast was developed. We also described, based on economic literature, why forecasting generally includes a great deal of uncertainty. We also identified and reviewed three relevant industry and academic studies that focused on the supply of and demand for airline pilots. The reviewed studies included (1) Lovelace, Higgins, et al, An Investigation of the United States Airline Pilot Labor Supply, 2013; (2) Brant Harrison from Audries Aircraft Analysis, Pilot Demand Projections/Analysis for the Next 10 Years Full Model, 2013; and (3) the MITRE Corporation, Pilot Supply Outlook, 2013. To evaluate these studies, we reviewed their methods, assumptions, and limitations. Each study was reviewed by one GAO economist, whose review was then verified by a second GAO economist. In our review of An Investigation of the United States Airline Pilot Labor Supply, we replicated the study’s analysis using data provided by the lead researchers, which raised questions about a specific assumption made about future increases in the cost of pilot training. To determine the extent to which the conclusions of the study were based on this specific assumption, we varied the assumption to determine the extent to which that would lead to a different conclusion. We discussed our analysis in detail with the lead researchers, and in general, they acknowledged that our findings were valid, but provided reasons to explain why the original assumption used in the study was warranted. To identify trends in supply sources for qualified airline pilots, we obtained data from 2000 through 2012 from civilian and military sources for pilots. We analyzed data from the Department of Education (Education) on annual completions by major in professional pilot programs; data from the Department of Defense (DOD) on expectations for the number of new pilots entering military service and separating from the military; and FAA’s data on the number of individuals holding and obtaining pilot certificates and instrument ratings by year, specifically: Education: To describe national trends in completions in professional pilot degree programs, we analyzed data from Education’s Integrated Postsecondary Education Data System (IPEDS). We used Education’s Classification of Instructional Programs (CIP) and matched degree programs to our SOC codes to identify the relevant degree programs. Specifically, the CIP-SOC relationship indicates that programs classified in the CIP category prepare individuals directly for jobs classified in the SOC category. 
The categories of degree-granting schools included in our analysis were 4-year research, 4-year master’s, 4-year baccalaureate, 2-year associate’s, and vocational schools. Unless otherwise noted, data estimates for graduation rates are within a confidence interval of 5 percentage points. DOD: To better understand the role of the U.S. military as a source of potential airline pilots, we obtained data on military pilots separating from the Service branches (i.e., the Air Force, Army, Marine Corps, and Navy); the current number of pilots in each Service; and forecasted rates of separation for pilots. We interviewed military officials at the Pentagon to understand how separation trends in the future will compare to past trends. FAA: To better understand trends in the number of pilot certificates and instrument ratings held and new certificates issued, and the age distribution of current airline transport pilot (ATP) certificate holders, we obtained data from FAA on pilot certificates and instrument ratings held and issued from 2000 through 2012. We also obtained data from FAA on the estimated number of active ATP certificates held by age group during this period in order to exclude the number of certificates held by pilots age 65 and older because they would not be allowed to work as airline pilots due to mandatory age retirement. The database in which certificate-holder information is stored maintains records on individuals until FAA is informed of their death. To assess the reliability of Education, DOD, and FAA data, we reviewed documentation related to all data sources from prior GAO reports and the agencies’ websites, and interviewed knowledgeable government officials about the quality of the data. We determined that the data were sufficiently reliable to describe general sources of supply of airline pilots and to support broad conclusions about trends in these sources over recent years. To develop our list of actions that employers may take to mitigate labor shortages, we reviewed economic literature and interviewed the authors. We also interviewed selected industry associations that represent airlines, the unions that represent pilots, and government officials to get a broader sense of the extent to which employers are taking actions to mitigate labor shortages. To supplement these broader trends, we also reviewed data from and interviewed representatives from passenger and cargo airlines, and selected collegiate aviation and non-collegiate vocational pilot schools. We contacted and gathered information from 10 mainline passenger and cargo airlines, and 12 regional passenger airlines. We selected the mainline and regional airlines based on size in terms of passengers transported in 2012 and stakeholders’ recommendations. While these 12 regional airlines were responsible for transporting about 71 percent of regional passengers in 2012, their views and experiences should not be used to make generalizations about all regional airlines. We also interviewed representatives of 10 collegiate aviation and 2 non-collegiate vocational pilot schools, which accounted for about half of the students who graduated with professional pilot majors in 2012. We selected these schools based on geographical diversity, average number of student enrollments in pilot training programs, stakeholders’ recommendations, and our previous work related to pilot training.
While these schools were among the largest schools in terms of student pilot enrollments, our findings should not be used to make generalizations about the views or experiences of all of the pilot training schools in the United States. We also met with and reviewed documents from various industry stakeholders, including pilot labor unions, airline associations, and industry organizations, among others (see table 2). We conducted this performance audit from March 2013 through February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following individuals made important contributions to this report: Andrew Von Ah, Assistant Director; Amy Abramowitz; Benjamin Bolitzer; Russell Burnett; Vashun Cole; Dave Hooper; Bonnie Pignatiello Leer; John Mingus; Susan Offutt; Joshua Ormond; and Amy Rosewarne.
Over 66,000 airline pilot jobs exist for larger mainline and smaller regional airlines that operate over 7,000 commercial aircraft. After a decade of turmoil that curtailed growth in the industry and resulted in fewer pilots employed at airlines since 2000, recent industry forecasts indicate that the global aviation industry is poised for growth. However, stakeholders have voiced concerns that imminent retirements, fewer pilots exiting the military, and new rules increasing the number of flight hours required to become a first officer for an airline, could result in a shortage of qualified airline pilots. GAO was asked to examine pilot supply and demand issues. This report describes (1) what available data and forecasts reveal about the need for and potential availability of airline pilots and (2) what actions industry and government are taking or could take to attract and retain airline pilots. GAO collected and analyzed data from 2000 through 2012, forecasts from 2013 through 2022, and literature relevant to the labor market for airline pilots and reviewed documents and interviewed agency officials about programs that support training. GAO interviewed and collected data from associations representing airlines or their pilots, and pilot schools that accounted for about half of the students who graduated with professional pilot majors in 2012. GAO selected the airlines and schools based on factors such as size and location. GAO is not making recommendations in this report. The Department of Transportation and others provided technical clarifications on a draft of the report, which GAO incorporated. GAO found mixed evidence regarding the extent of a shortage of airline pilots, although regional airlines have reported difficulties finding sufficient numbers of qualified pilots over the past year. Specifically, looking at broad economic indicators, airline pilots have experienced a low unemployment rate—the most direct measure of a labor shortage; however, both employment and earnings have decreased since 2000, suggesting that demand for these occupations has not outstripped supply. Looking forward, industry forecasts and the Bureau of Labor Statistics' employment projections suggest the need for pilots to be between roughly 1,900 and 4,500 pilots per year, on average, over the next decade, which is consistent with airlines' reported expectations for hiring over this period. Yet studies GAO reviewed examining whether the future supply of pilots will be sufficient to meet this need had varying conclusions. Two studies point to the large number of qualified pilots that exists, but who may be working abroad, in the military, or in another occupation, as evidence that there is adequate supply. However, whether these pilots choose to seek employment with U.S. airlines depends on the extent to which pilot job opportunities arise, and on the wages and benefits airlines offer. Another study concludes that future supply will be insufficient, absent any actions taken, largely resulting from accelerating costs of pilot education and training. Such costs deter individuals from pursuing a pilot career. Pilot schools that GAO interviewed reported fewer students entering their programs resulting from concerns over the high costs of education and low entry-level pay at regional airlines. As airlines have recently started hiring, nearly all of the regional airlines that GAO interviewed reported difficulties finding sufficient numbers of qualified entry-level first officers. 
However, mainline airlines, because they hire from the ranks of experienced pilots, have not reported similar concerns, although some mainline airlines expressed concerns that entry-level hiring problems could affect their regional airline partners' ability to provide service to some locations. Airlines are taking several actions to attract and retain qualified commercial airline pilots. For example, airlines that GAO interviewed have increased recruiting efforts, and developed partnerships with schools to provide incentives and clearer career paths for new pilots. Some regional airlines have offered new first officers signing bonuses or tuition reimbursement to attract more pilots. However, some airlines found these actions insufficient to attract more pilots, and some actions, such as raising wages, have associated costs that have implications for the industry. Airline representatives and pilot schools suggested FAA could do more to give credit for various kinds of flight experience in order to meet the higher flight-hour requirement, and could consider developing alternative pathways to becoming an airline pilot. Stakeholders were also concerned that available financial assistance may not be sufficient, given the high costs of pilot training and relatively low entry-level wages.
Initially referred to as the “Next Generation Space Telescope,” JWST is a large deployable, infrared-optimized space telescope intended to be the successor to the aging Hubble Space Telescope. JWST is designed to be a 5-year mission to find the first stars and trace the evolution of galaxies from their beginning to their current formation, and is intended to operate in an orbit approximately 1.5 million kilometers—or 1 million miles—from the Earth. In a 2001 decadal survey, the National Research Council rated the JWST as the top-priority new initiative for astronomy and physics. With its 6.5-meter primary mirror, JWST will be able to operate at 100 times the sensitivity of the Hubble Space Telescope. A tennis-court-sized sunshield will protect the mirrors and instruments from the sun’s heat to allow the JWST to look at very faint infrared sources. The Hubble Space Telescope operates primarily in the visible and ultraviolet regions. JWST has experienced significant increases to project costs and schedule delays. Prior to being approved for development, cost estimates of the project ranged from $1 billion to $3.5 billion with expected launch dates ranging from 2007 to 2011. In March 2005, NASA increased the JWST’s life-cycle cost estimate to $4.5 billion and slipped the launch date to 2013. We reported in 2006 that about half of the cost growth was due to schedule slippage—a 1-year schedule slip because of a delay in the decision to use a European Space Agency-supplied Ariane 5 launch vehicle and an additional 10-month slip caused by budget profile limitations in fiscal years 2006 and 2007. More than a third of the cost increase was caused by requirements and other changes. An increase in the program’s contingency funding accounted for the remainder—about 12 percent—of the growth. NASA Headquarters chartered an Independent Review Team to evaluate the project that same year. In April 2006, the review team’s assessment confirmed that the program’s technical content was complete and sound, but expressed concern over the project’s contingency reserve funding—funding used to mitigate issues that arise but which were previously unknown—reporting that it was too low and phased in too late in the development life cycle. The team reported that for a project as complex as the JWST, a 25 to 30 percent total contingency was appropriate. At that time, JWST’s total contingency was about 19 percent. The team cautioned that this contingency compromised the project’s ability to resolve issues, address risk areas, and accommodate unknown problems. The team also concluded that the 2013 launch date was not viable for the project based on its anticipated budget. It recommended that before the project was formally approved for development and baselined, NASA should take steps to provide the JWST project with adequate time-phased reserve funding to secure a stable launch date. Additional reserves were added and the project was baselined in April 2009 with a life-cycle cost estimate of $4.964 billion and a launch date in June 2014. Shortly after JWST was approved for development and its cost and schedule estimates were baselined, project costs continued to increase. In 2010, Senator Barbara Mikulski, chair of the Senate Committee on Appropriations, Subcommittee on Commerce, Justice, Science, and Related Agencies, asked NASA to initiate another independent review in response to the project’s cost increases and reports that the June 2014 launch date was in jeopardy. 
The Independent Comprehensive Review Panel (ICRP) was commissioned by NASA and began its review in August 2010. In October 2010, the ICRP issued its report and cited several reasons for the project’s problems, including management, budgeting, oversight, governance and accountability, and communication issues. The panel concluded that JWST was executing well from a technical standpoint, but that the baseline funding did not reflect the most probable cost with adequate reserves in each year of project execution, resulting in an unexecutable project. The review panel recommended that additional resources be considered along with organizational and management restructuring. Following this review, the JWST program underwent a replan in 2011. In November 2011, the JWST project was reauthorized, but not before it was recommended for termination by the House Appropriations Committee. On the basis of the replan, NASA announced that the project would be rebaselined at $8.835 billion—a 78 percent increase to the project’s life-cycle cost from the confirmed baseline—and would launch in October 2018—a delay of 52 months. The revised life-cycle cost estimate included 13 months of funded schedule reserve. In the President’s Fiscal Year 2013 budget request, NASA reported a 66 percent joint cost and schedule confidence level associated with these estimates. A joint cost and schedule confidence level (JCL) is the process NASA uses to assign a percentage to the probable success of meeting cost and schedule targets and is part of the project’s estimating process. The JWST project is divided into three major segments: the launch segment, the ground segment, and the observatory segment. The launch segment is primarily provided by the European Space Agency (ESA), which is contributing the Ariane 5 launch vehicle and launch site operations in French Guiana. The ground segment will be responsible for collecting the data obtained by JWST in space and making it usable for scientists and researchers. This includes the development of software that will translate data into usable formats as well as operation of the software once the telescope is in space. The Space Telescope Science Institute, operated by the Association of Universities for Research in Astronomy (AURA) on a contract awarded by NASA, which currently performs science operations for the Hubble Space Telescope, is developing the science operations and flight operations center for JWST and will conduct the first 6 months of flight and science operations. The NASA contract with the Space Telescope Science Institute extends through the first 6 months of JWST operations. A contract to manage the long-term operations of JWST is planned to be awarded approximately 2 years prior to launch. The observatory segment will be launched into space and includes five major subsystems. These subsystems are being developed through a mixture of NASA, contractor, and international partner efforts. See figure 1. JWST is a single-project program reporting directly to the NASA Associate Administrator for programmatic oversight and to the Associate Administrator for the Science Mission Directorate for technical and analysis support. Goddard Space Flight Center is the NASA center responsible for the management of JWST. See figure 2 for the current JWST organizational chart.
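The joint cost and schedule confidence level described above is, at its core, a probabilistic simulation over cost and schedule risks. The following minimal sketch in Python illustrates the general concept only; the activities, durations, burn rates, and targets are hypothetical placeholders and do not represent NASA's JCL methodology, tools, or JWST data.

import random

# Minimal sketch of a joint cost and schedule confidence calculation
# (hypothetical activities and distributions; not NASA's JCL model or JWST data).

random.seed(1)

# Each activity: (most-likely duration in months, duration spread, monthly burn rate in $M).
activities = [(24, 6, 20.0), (18, 4, 15.0), (12, 3, 10.0)]

schedule_target_months = 60
cost_target_millions = 900.0
trials = 10000
hits = 0

for _ in range(trials):
    total_months = 0.0
    total_cost = 0.0
    for likely, spread, burn in activities:
        # Sample a duration around the most-likely value; longer work also costs more.
        duration = random.triangular(likely - spread, likely + 2 * spread, likely)
        total_months += duration
        total_cost += duration * burn
    if total_months <= schedule_target_months and total_cost <= cost_target_millions:
        hits += 1

confidence = hits / trials
print(f"Joint confidence of meeting both the cost and schedule targets: {confidence:.0%}")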
Our analysis of JWST’s revised cost estimate showed that it is not fully consistent with best practices for developing reliable and credible estimates, although project officials took some steps in line with best practices in the development of the estimate. For example, as part of its cost estimation process, the project conducted a joint cost and schedule risk analysis, or joint cost and schedule confidence level (JCL), which assigned a 66 percent confidence level to the estimate. In addition, we found that the cost estimate included all life cycle costs for the project. Although NASA’s methods for developing the JWST cost estimate reflect some features of best practices, our review of the estimate showed that based on best practice criteria, it did not fully meet the four characteristics of a reliable estimate. See figure 3. Specifically, the project’s estimate was found to substantially meet the best practice criteria for being comprehensive, and the remaining three characteristics of being well documented, accurate, and credible were found to be only partially met. For example, the accuracy of the cost estimate, and therefore the confidence level assigned to the estimate, was lessened by the schedule used in the JCL analysis because it prevented us from, among other things, identifying the activities that were on the critical path—defined as time associated with activities that drive the overall schedule. The credibility of the estimate was lessened because project officials did not perform a sensitivity analysis that would have identified key drivers of costs, such as workforce size. Although NASA is not required to adhere to these best practices, our prior work has shown that not following best practices for cost estimating can make the cost estimate less reliable, putting projects at risk of experiencing cost overruns, missed deadlines, and performance shortfalls. The best practices stem from practices federal cost estimating organizations and industry use to develop and maintain reliable cost estimates, including the Department of Defense and NASA. According to program officials, it would have been difficult, if not impossible, for the project to have met all of the best practice criteria given the complexity of the project and that some elements of the project are quite mature in their development. Instead, the program manager stated that the project followed a tailored process to develop the cost estimate that was appropriate for the project. Furthermore, officials report the project is currently meeting a majority of its milestones and executing as planned to the revised estimates for the JWST. A work breakdown structure reflects the requirements and what must be accomplished to develop a program, and it provides a basis for identifying resources and tasks for developing a program cost estimate. The work breakdown structure should be used to define all program activities and tasks to ensure that the schedule encompasses the entire work. two was not compatible. Finally, although the project outlined and documented the ground rules and assumptions, we were unable to determine whether risks associated with any assumptions were identified and traced to specific elements. 
Well documented: The JWST cost estimate only partially met the criteria for being well documented because it did not include a step-by-step description of how the estimate was developed, the raw data used to develop the estimate, or the calculations and estimating methodology for specific cost elements of the work breakdown structure. Without good documentation, a cost analyst unfamiliar with the program will not be able to replicate the estimate, because he or she will not understand the logic behind it. Good documentation, for example, assists management and oversight in assessing the credibility of the estimate, helps to keep a history of reasons for cost changes and to record lessons learned, defines the scope of the analysis, and answers questions about the approach or data used to create the estimate. Project documentation, however, does provide evidence that NASA management reviewed and accepted the cost estimate because managers were briefed on the technical aspects of the estimate and were provided an overview of the joint cost and schedule risk analysis that was conducted. Accurate: The JWST cost estimate only partially met the criteria for being accurate because the projected costs of schedule reserve did not reflect actual data, the summary schedule used to derive the JCL prevented us from sufficiently understanding how risks were incorporated, and the project did not provide evidence that it regularly updates the estimate or plans to conduct another JCL. For example, using historical actual cost data from Northrop Grumman, we estimated that 13 months of schedule reserve is likely to be $204 million instead of NASA’s estimate of $121 million—a potential underestimation of 69 percent related to the schedule reserve. Project officials, however, believe they have adequate reserves available to offset any underestimation. In addition, the summary schedule the project used as an input to the JCL, although deemed acceptable by NASA, contained many long-duration activities, some with 1,000 days or more. Because of these long durations in the summary schedule used for the JCL, the lack of detail prevented us from identifying the activities that were on the critical path, as well as which risks were applied to remaining activities. As a result, there is no way to ensure that risks were appropriately assigned to activities in the schedule to account for the impact of the risks during the JCL analysis. Finally, it was unclear whether the cost estimate was regularly updated to reflect material changes in actual costs and in the project itself, such as when schedules or other assumptions change, due to a lack of detailed documentation for the cost estimate. Project officials stated that in keeping with NASA policy they do not plan, nor are they required, to conduct another JCL analysis. GAO’s cost estimating best practices call for estimates to be continually updated through the life of the project, ideally every month as actual costs are reported in earned value management reports, and that a risk analysis and risk simulation exercise—like the JCL analysis—be conducted periodically through the life of the program, as risks can materialize or change throughout the life of a project. Unless properly updated on a regular basis, the cost estimate cannot provide decision makers with accurate information to assess the current status of the project. 
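The potential underestimation described above is essentially a burn-rate calculation: months of schedule reserve multiplied by the monthly cost of carrying the project team. The short Python sketch below reproduces only the arithmetic; the monthly rate shown is implied by dividing the $204 million estimate by 13 months and is not a rate reported by NASA or Northrop Grumman.

    # Illustrative check of the schedule-reserve cost comparison described above.
    # The monthly burn rate is an implied figure used only to reproduce the
    # arithmetic; the actual analysis used Northrop Grumman historical cost data.
    months_of_reserve = 13
    implied_monthly_burn_rate_m = 204 / 13            # roughly $15.7 million per month (implied)
    estimated_reserve_cost_m = months_of_reserve * implied_monthly_burn_rate_m
    nasa_estimate_m = 121
    underestimate_pct = (estimated_reserve_cost_m - nasa_estimate_m) / nasa_estimate_m * 100
    print(f"Estimated cost of 13 months of reserve: ${estimated_reserve_cost_m:.0f} million")
    print(f"Potential underestimation: {underestimate_pct:.0f} percent")   # about 69 percent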
NASA officials state that the life-cycle cost estimate is updated annually for the budgeting process, and that historical records such as earned value data were used to develop the estimate. They also stated that this information is updated in several different documents being provided to management; however, we were unable to determine how this information was used in updating the cost estimate on a regular basis. Credible: The JWST cost estimate only partially met the criteria for being credible because project officials did not adequately test and verify the reasonableness of the cost estimate, and the schedule used in conducting the JCL did not have a valid critical path and contained durations that were too long to properly account for risks. For example, project officials said they did not perform a sensitivity analysis for the cost estimate. A sensitivity analysis identifies key elements that drive cost, permits analysis of different outcomes, and is often used to develop cost ranges and risk reserves. NASA officials stated that the largest cost driver for the JWST project is the size of the workforce, which could have been subjected to a sensitivity analysis; yet the cost model did not include a sensitivity analysis that would show how staff levels increasing or decreasing over time affect cost. In addition, NASA officials believe that all risks were sufficiently accounted for when conducting the JCL; however, the software used to conduct the JCL analysis does not recognize certain risks that officials had placed on activities in the project schedule and, therefore, some risks were discarded during the simulation. The schedule used to conduct the JCL was also summarized at such a high level that the durations were too long to effectively model the risks. For example, one of the activities that drove the launch date was over 4 years in duration and should have been broken down further prior to conducting the simulation. Moreover, the critical path in the JCL schedule consisted of six level of effort activities, all with the same duration of 2,238 days. Level of effort activities should never be on the critical path because support activities should never drive any milestone finish date. Because the schedule used in the JCL did not fully meet best practices, we question the results of the analysis. Furthermore, because a sensitivity analysis was not performed, the risk of having to carry the JWST workforce to support the project if it is delayed was not included. Project officials report that, instead, risk associated with the workforce was factored in when establishing cost reserves. In addition, project officials did not commission an independent cost estimate, which is considered one of the best and most reliable estimate validation methods because it shows whether other estimating procedures produce similar results, and it provides an independent view of expected program costs that tests the program office’s estimate for reasonableness. An estimate that has not been reconciled with an independent cost estimate has an increased risk of being underfunded because the independent cost estimate provides an objective and unbiased assessment of whether the project estimate can be achieved. Notably, however, project officials provided evidence that an independent cost assessment was done for the project at the request of the JWST Standing Review Board, the independent review team for the project, and the assessment was within 2 percent of the project’s estimated cost for the rebaseline.
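A sensitivity analysis of the kind discussed above would vary a single assumption, such as workforce size, and show the resulting swing in estimated cost. The Python sketch below is a minimal illustration of that idea; the staffing level, labor rate, and remaining duration are hypothetical placeholders, not JWST figures.

    # Minimal sensitivity-analysis sketch: vary one assumption (workforce size)
    # and observe the effect on estimated labor cost. All values are hypothetical.
    BASELINE_STAFF = 1000          # full-time equivalents (assumed)
    ANNUAL_COST_PER_FTE_M = 0.25   # $ millions per FTE per year (assumed)
    YEARS_REMAINING = 6            # remaining development span (assumed)

    def labor_cost_m(staff):
        return staff * ANNUAL_COST_PER_FTE_M * YEARS_REMAINING

    baseline = labor_cost_m(BASELINE_STAFF)
    for change in (-0.10, -0.05, 0.05, 0.10):          # plus or minus 5 and 10 percent staffing excursions
        excursion = labor_cost_m(BASELINE_STAFF * (1 + change))
        print(f"Staffing {change:+.0%}: labor cost ${excursion:,.0f} million "
              f"({excursion - baseline:+,.0f} million vs. baseline)")

A table of excursions like this one is what allows an estimator to identify which single assumption most affects the estimate and to size reserves accordingly.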
Project officials contend that the approach they used in developing the life-cycle cost estimate for the project is more accurate than the types of approaches often used to develop an independent estimate. We did not conduct a full schedule assessment to determine the reliability of the revised schedule based on best practices due to ongoing contract negotiations. The project has an integrated master schedule developed as part of the replan; however, it is not finalized because major contract modifications have yet to be negotiated and definitized. Specifically, the modification to the Northrop Grumman contract, which accounts for approximately 40 percent of the total project cost and spans much of the work on the spacecraft and OTE, remains undefinitized more than a year after the project was rebaselined. Once the project completes negotiations for the contract modification and all schedule dates are set, the project can then have a measurable integrated master schedule. Project officials stated that the negotiation process and updating of associated schedules are planned to be complete in January 2013 for the Northrop Grumman contract modification—a year after submission of the latest update to its proposal for the replan. The project also reported that multiple Defense Contract Audit Agency audits of the proposals submitted by Northrop Grumman and its subcontractor have delayed definitization. Negotiations for the modification to NASA’s contract with the Space Telescope Science Institute to incorporate the October 2018 launch readiness date are not scheduled to be complete until spring 2013. Once all the contracts have been definitized and the project’s integrated master schedule is baselined, we plan to conduct a comprehensive best practices assessment of the reliability of the project’s schedule estimates. Project officials report that the JWST schedule has 14 months of reserve, which meets Goddard guidance for schedule reserve; however, only 7 of the 14 months are likely to be available for the last three of JWST’s five complex integration and test efforts. GAO’s prior work shows that it is during integration and test when problems are commonly found and schedules tend to slip. Given that JWST has a challenging integration and test schedule, this could particularly be the case. The project has made some significant progress in the past year, notably successfully completing development of the 18 primary mirror segments—considered JWST’s top technical risk. Nevertheless, ongoing challenges are indicative of the kinds of issues that can require a significant amount of effort to address. For example, instrument challenges have delayed the first integration and test effort. In addition, key long-term risks on subsystems with a significant amount of work remaining will not be retired until 2016. Currently, NASA’s plan for project oversight calls for one independent system integration review about 13 months before launch. While this is consistent with what NASA requires for its projects, this approach may not be sufficient for a project as complex as JWST. As a result, the current plan may be inadequate to ensure key technical and management issues are identified early enough to be addressed within the current integration and test phase schedule. JWST has a complex and lengthy integration and test phase, which includes five major integration and test efforts—ISIM, OTE, OTIS, spacecraft, and observatory.
See figure 4 for the project reported dates for the major integration and test efforts and the schedule reserve allocated for each effort. Overall, project officials report that the critical path schedule has 14 months of reserve with 7 months after the ISIM and OTE integration and test efforts. If these efforts are delayed beyond those 7 months, they will impinge on the schedule for the remaining three integration and test efforts. Project officials stated that the baseline plan is for the OTIS integration and test effort to not begin earlier than May 2016. These officials reported it is likely that all of the 7 months of schedule reserve held by the OTE subsystem will be utilized during its integration and test prior to delivery to OTIS and that the OTE effort is on the critical path for the project. Therefore, the remaining integration and test efforts—OTIS, Spacecraft, and Observatory—will likely have at most 7 months divided among them to use if issues are found during integration and test. In addition to not likely being able to conserve any of the unused first 7 months of schedule reserve, the project has limited time allocated to the final three integration and test efforts, with between 2 to 4 months for each. This time could be used easily by the project if an issue were to arise during integration and test. An example of this is seen in the OTIS integration and test schedule, which currently has 3 months of schedule reserve. The final event in the OTIS integration and test effort is a lengthy cryo-vacuum test—the first time that the optics integrated with the instruments will be tested at operational temperatures near absolute zero (less than -400 degrees Fahrenheit)—that takes approximately 3 months, due to the requirements of the test. If an issue were to arise during this test that requires shutting the test down and working on the hardware, the chamber would have to be slowly warmed to a temperature safe for removal of the hardware from the chamber, work would be performed, and the 3-month test process would need to begin again. This could easily exhaust the available schedule reserve. Prior GAO work shows that it is during integration and test when problems are commonly found, and schedules tend to slip. A project official confirmed that this is the case because during integration and test the process is more sequential and there is less flexibility to move work around if problems are found. A NASA Inspector General report on the Mars Science Laboratory, another complex and high-cost mission, found that historically the probability that schedule-impacting problems will arise is commensurate with the complexity of the project. JWST is one of NASA’s most technologically complex projects to date. The project has made significant progress overcoming several technical challenges over the last year. In December 2011, for example, the project completed development of the 18 segments of the primary mirror—the project’s primary technology risk—approximately 6 weeks ahead of schedule. In addition, project officials stated that during the last year they were also able to accelerate other optics-related work, which added one month of funded reserve to the schedule, bringing the total to 14 months. Finally, the project successfully addressed an increase in the estimated amount of heat on the instruments, which otherwise could have pushed observatory temperatures close to where the optics would not function correctly. 
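The cryo-vacuum test scenario described above can be expressed as simple reserve arithmetic: a failure part-way through the roughly 3-month test forfeits the test time already spent and adds warm-up and repair time before the full test must be rerun. In the Python sketch below, the warm-up and repair durations are assumed values for illustration; only the test length and the 3 months of OTIS reserve come from the figures reported above.

    # Sketch of how a single problem during the OTIS cryo-vacuum test could consume
    # the available schedule reserve. Warm-up and repair durations are assumptions.
    TEST_DURATION_MONTHS = 3.0
    OTIS_RESERVE_MONTHS = 3.0

    def extra_schedule_needed(months_into_test, warmup_months=0.5, repair_months=1.0):
        """Additional time beyond plan if a failure forces shutdown, repair, and a full re-test."""
        lost_test_time = months_into_test          # progress discarded when the test restarts
        return lost_test_time + warmup_months + repair_months

    extra = extra_schedule_needed(months_into_test=2.0)
    print(f"Additional schedule needed: {extra:.1f} months "
          f"(reserve available: {OTIS_RESERVE_MONTHS:.1f} months)")
    print("Reserve exhausted" if extra > OTIS_RESERVE_MONTHS else "Within reserve")

Under these assumed values, a single failure two months into the test would require about 3.5 additional months, more than the reserve allocated to the OTIS effort.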
Although technical challenges are being overcome, the project will likely continue to experience additional challenges over the remainder of the project, given the significant portion and complexity of the work remaining. Four of the six major subsystems have nearly 50 percent or more of their development work remaining based on current budget information, although the dollar amounts associated with the work vary. See figure 5. Currently, the project is experiencing several technical issues that have required a significant amount of time and effort to address. For example, the spacecraft subsystem, which experienced delays in development prior to the replan, is currently estimated to be heavier than its mass limit. Spacecraft development has lagged behind other subsystems because it was viewed as a lower risk part of the project and was therefore not allocated funding when budgets were limited prior to the replan. In March 2010, the project passed its mission critical design review, which evaluated the project design and its ability to meet mission requirements and indicated that the design was ready for the fabrication phase; however, the spacecraft was not included in this review due to its delayed development. Under the initial replan, which had constrained funding in fiscal years 2011 and 2012, the spacecraft critical design review was scheduled for June 2014; however, due to additional funding in the final agency-approved replan, the project was able to accelerate work and this review is now planned for December 2013. Project officials have been concerned with the mass of JWST since its inception because of the telescope size and the limits of available launch vehicles. Accordingly, mass limits have been allocated for each subsystem, including the spacecraft. Project officials stated that they expected to encounter mass growth on the spacecraft, but that the magnitude of the mass growth on the spacecraft was unexpected. As shown in figure 6, the current spacecraft projected mass exceeds its mass allocation. Primary drivers of the mass growth on the spacecraft are increases in the estimated weight of the wiring harnesses—which distribute power and electric signals between different parts of the observatory—the solar array, and other structures that make up the spacecraft. The burden to find ways to reduce mass has been primarily placed with the spacecraft because it was assessed by the project to have the least technical risk and because it is the least mature subsystem and can more easily accommodate design changes. Over 100 kilograms, or 220 pounds, of mass savings options are being evaluated by the project and Northrop Grumman, which is developing the spacecraft. Potential mass solutions have been identified by Northrop Grumman and the project; however, cost and risk vary with each solution and the project is still evaluating the trade-offs of the various solutions. Project officials stated that final decisions for all tradeoffs will need to occur before the spacecraft critical design review in December 2013. The ISIM subsystem is experiencing technology and engineering challenges that resulted in the use of 18 of ISIM’s 26 months of schedule reserve. The schedule for the instruments needed for ISIM continues to slip, which could result in use of more schedule reserve. Based on the replan, all four instruments were to be delivered by September 2012; however, only two instruments were delivered by that time and those still have issues that must be addressed.
The remaining two instruments are currently scheduled to be delivered at least 11 months late. See table 1 below for the instrument-specific issues. In addition to the instrument delays, two other technical challenges associated with ISIM are: (1) the detectors used by three of the four instruments to capture infrared light in space are degrading and may need to be replaced, resulting in the addition of another round of cryo-vacuum testing—in which a test chamber is used to simulate the near absolute zero temperatures in space, and (2) issues with the development of the cryo-cooler system that removes heat and cools MIRI. In December 2010, the project became aware that the detectors in three of the instruments were degrading. As a result, approximately $42 million and 15 months of schedule reserve to replace the detectors were included in the replan. These additions covered the cost of manufacturing the detectors; fabrication, assembly, and test of new focal plane assemblies; changing the detectors on three instruments; and the addition of a third ISIM cryo-vacuum test. The manufacturing process for new detectors takes approximately 30 months, which means that they cannot be delivered until after the second round of ISIM cryo-vacuum testing in 2014. As a result, $2 million of the $42 million in the replan was used to add a third round of cryo-vacuum testing for ISIM. The third test will validate the performance requirement of the ISIM and is the only time the instruments are tested with the flight detectors. Changing the detectors requires disassembling the instruments from ISIM, a process that will risk damage to the structure and instruments. Project officials stated that they will continue to monitor the degradation rate of the current detectors because if the degradation rate is low, they may not replace the detectors. Development issues with a part of the cryo-cooler needed for MIRI have delayed its delivery to ISIM. In 2010, project officials realized that an essential valve in the cryo-cooler was leaking at rates that exceeded requirements. Following the results of a failure review board, the contractor manufactured a newly designed valve, but it also did not meet leak rate requirements. Project officials stated that a new valve design will not be manufactured in time for use in the first ISIM cryo-vacuum test. The project is concurrently developing three alternatives and authorized manufacturing for one of the alternatives in October 2012. Project officials stated that the MIRI cryo-cooler is particularly complex because it spans approximately 10 meters—or approximately 33 feet—through the entire JWST observatory. These issues combined required the use of 18 months of schedule reserve, which reduced ISIM’s schedule reserve from the 26 months established in the replan to 8 months before it is needed for integration with the OTIS subsystem. These types of issues are not uncommon among NASA programs as technical issues tend to arise when disparate parts are integrated and tested together for the first time. Given the complexity and cutting-edge technology developed and used on JWST, it is expected that these kinds of issues will continue to materialize as the program moves through its complex integration and test program. Figure 7 shows the delay of instrument deliveries as well as changes to the ISIM integration and test and final delivery dates over the last year.
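The reserve consumption described above amounts to a simple ledger: 26 months established in the replan, with 18 months drawn down by the instrument, detector, and cryo-cooler issues. The Python sketch below illustrates such a ledger; the split among the individual draws is assumed for illustration, since only the totals are reported.

    # Simple schedule-reserve ledger reflecting the ISIM totals above.
    # The individual draw amounts are hypothetical; only the totals come from the report.
    established_reserve_months = 26
    draws = {
        "Instrument delivery delays": 10,                       # assumed split
        "Detector degradation / added cryo-vacuum test": 5,     # assumed split
        "MIRI cryo-cooler valve issues": 3,                     # assumed split
    }
    remaining = established_reserve_months - sum(draws.values())
    for reason, months in draws.items():
        print(f"{reason}: -{months} months")
    print(f"Remaining ISIM schedule reserve: {remaining} months")   # 8 months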
Until the project is able to overcome the major issues with the instruments and other parts of the ISIM, it is likely that the ISIM schedule will continue to slip and may begin to affect the overall project schedule. ISIM still has 8 months of schedule reserve before the slipping of its schedule would affect the schedule for the remainder of the project. The instrument, detector, and cryo-cooler issues have all contributed to the delay in the ISIM integration and test schedule and the reduction of objectives that can be achieved in the first two rounds of cryo-vacuum testing. The first round of testing will not include two instruments, a final design of the cryo-cooler hardware, or new detectors. As a result, project officials will only be able to gather risk reduction information on the FGS/NIRISS, MIRI, test procedures, and test support equipment from the first cryo-vacuum test. The project also has several known long-term risks and challenges remaining. For example, risks related to OTIS, the sunshield, and the ground system subsystems are not scheduled to be addressed until late in project development. As of October 2012, seven of the top 10 project risks were related to the long-term risks associated with the OTIS and sunshield, most of which will not be resolved until 2016 or later. For example, several risks relating to OTE are not scheduled to be closed until the OTIS testing in the chamber at Johnson Space Center in February 2017. Project officials are adding risk mitigation through early and additional testing, where possible, to these subsystems. Prior to the replan, the ground system software was at high risk for not being completed before launch and many tasks were planned for completion after launch. Space Telescope Science Institute officials stated that the replan allows them to plan for completion of their work before launch on a more realistic time schedule, which decreases schedule and operational risk. A continuing challenge on the ground system is that some development and testing is dependent on the final design of subsystems such as the instruments, which continue to slip delivery dates. The project plans to hold independent and management reviews required for all projects during the integration and test phase, but this phase for JWST is particularly complex. JWST has five major integration and test efforts that span 7 years and only one independent mission-level technical review—the system integration review. The system integration review evaluates the readiness of the project and associated supporting infrastructure to begin system assembly, integration, and test, and evaluates whether the remaining project development can be completed within available resources. For JWST, this review is scheduled for September 2017, only 13 months prior to launch. Projects we reviewed that had recently launched, however, held their system integration review on average approximately 22 months prior to launch. The project has planned an internal review, with participation from standing review board members, before OTIS integration and test activities begin, and the integration and test process will be subject to independent lower-level reviews conducted by the Goddard Systems Review Office. In addition, key decision point D (KDP-D)—when the senior agency decision authority would approve the project to proceed into the system integration and test phase—is scheduled for December 2017, 3 months after the commencement of the final major integration and test activity.
According to NASA policy, this review should be held prior to the start of the system integration and test phase of the project. Our analysis shows that over 90 percent of expected integration and test funding will be spent on four major integration and test activities prior to the scheduled mission-level system integration review and KDP-D approval by NASA senior management. As a result, the current plan may be inadequate to ensure that key technical and management issues are identified early enough to be addressed within the current integration and test phase schedule. The JWST project has taken steps to improve communications and oversight of its contractors as part of the replanning activities. For example, based on recommendations from the ICRP, the project has instituted meetings at various levels throughout NASA and its contractors and subcontractors. In addition, the project has added personnel at contractor facilities, which has allowed for more direct interaction and quicker resolution of issues. The project also assumed responsibility for the mission-level systems engineering function from Northrop Grumman, a move that shifts the authority to make trades or decisions to NASA. An independent NASA review of the project conducted in May 2012 found, however, that agencywide reductions in travel budgets have put the effectiveness of the JWST project’s oversight plans in jeopardy. While the project received partial relief from travel budget reductions in fiscal year 2012, project officials are concerned that the current level of oversight will not be sustained if similar cuts in travel funding occur in future years as anticipated. The project is also taking steps to enhance its oversight of project risks by implementing a new risk management system. The new project manager found that the previous system lacked rigor and was relatively ineffective for managing project risks, especially for a project as complex as JWST. The new system should allow for better tracking of risks than did the previous system. While these enhancements to the oversight of the project are steps in the right direction, it will take time to assess their effectiveness. Based on recommendations in the ICRP report, NASA has taken action to enhance oversight and communications. See table 2 for the ICRP recommendations and actions taken by NASA in response. NASA has taken steps to increase communication between the project and its contractors and subcontractors in an effort to enhance oversight. According to project officials, the increased communication has allowed them to better identify and manage project risks by having more visibility into contractors’ activities. The project reports that a great deal of communication existed across the project prior to the ICRP and replan; however, improvements have been made. For example, monthly meetings between project officials at Goddard and all of the contractors have continued on a regular basis and include half-day sessions devoted to business discussions. The project reports that these meetings have benefits over other forms of communication. For example, it was through dialogue with several technical leads at Northrop Grumman during detailed reviews of analytical models that the project identified that the mass issue on the spacecraft was likely to occur. In addition, the project has increased its presence at contractor facilities as necessary to provide assistance with issues.
For example, the project has had two engineers working on a recurring basis at Lockheed Martin to assist in solving problems with the NIRCam instrument. The ISIM manager said that these engineers have insight into Lockheed Martin’s work and are having a positive effect as they offer technical help and are involved in devising the solutions to issues. He added that these engineers have authority to make decisions on routine issues to allow the work flow to continue, but decisions that are more complex or require a commitment of funds are communicated to project management for disposition. The project reports that the Jet Propulsion Laboratory, responsible for NASA’s contribution to the MIRI instrument and its associated cryo-cooler, has an in-house representative in the responsible Northrop Grumman division to monitor the work being performed on the cryo-cooler. The JWST project also assumed full responsibility for the mission system engineering functions from Northrop Grumman in March 2011. NASA and Northrop Grumman officials both said that NASA is better suited to perform these tasks. Project officials stated that systems engineering requires the ability to make trades and decisions across the entire observatory, and because Northrop Grumman is only responsible for portions of the observatory, it did not have the authority to make trades or decisions for areas outside of its control. Although responsibility for the overall mission systems engineering function was removed from Northrop Grumman, it retains system engineering responsibility for work still under its contract, such as development of the spacecraft and sunshield. The ICRP noted that a highly capable, experienced systems engineering group is fundamental to project success and appropriate to ensure accountability, especially for a project of JWST’s complexity and visibility. While these enhancements to the oversight of the project are steps in the right direction, it will take time to assess their effectiveness. In addition, sustainment of these efforts on the part of the project will be important. Project and contractor officials we spoke with believe that the increased communication has had a positive effect on the relationships between them. We will continue to monitor the interaction between the project and its contractors and its frequency in future reviews to identify whether the changes have had the desired results. The JWST project reported that its travel budget was reduced by approximately $200,000 from the $1.2 million planned in fiscal year 2012 as a result of NASA’s implementation of an Executive Order to promote more efficient spending. According to project officials, the changes in oversight necessitated by a reduction in travel funds represent a major shift away from the management paradigm adopted during the replan. Proposed reductions in future fiscal years could significantly reduce the project’s travel budget. The project reports that the travel requirements for fiscal years 2013 through 2015 are $1.6 million, $1.7 million, and $1.8 million, respectively. Officials reported that while travel is a small percentage of the project’s annual budget, the majority of expected travel—about 87 percent—is for oversight functions put in place as a result of the ICRP recommendations, such as having a permanent on-site presence at Northrop Grumman.
These oversight functions include attending and participating in contractor monthly programmatic and technical reviews, technical interface meetings, recurring on-site presence at contractor facilities for quality assurance reviews and inspection of hardware. JWST project officials are concerned that decreased oversight could translate into the project increasing its use of cost and schedule reserves as they will not be conducting planned oversight to better ensure success. A recent NASA Office of Evaluation review concluded that by not having an adequate travel budget, the project is at risk of cost/schedule growth and/or technical risk due to the late identification of issues or timely resolution strategies. The project has made adjustments to absorb the reduction in fiscal year 2012 and plans to identify instances of increased cost or schedule risk due to late identification of issues. However, the project does not have a strategy to address anticipated future reductions. Ensuring adequate oversight is particularly important as the project begins its complex and lengthy test and integration phase, where issues will likely surface. As part of NASA’s approach to increase oversight of the project at headquarters, NASA’s Office of Evaluation recently conducted an independent review of the JWST project to assess the progress since the September 2011 rebaseline was approved. According to the Director of the Office of Evaluation, the goal of the review was not to reproduce the replan assessment, but rather to assess progress based on cost, schedule, and technical performance of the project and the status of oversight functions within NASA headquarters, the JWST Program Office, and Goddard Space Flight Center. The intended outcome of the review was 1) to obtain a snapshot of performance to determine if the program was progressing in accordance with its plan, and 2) to identify leading indicators for upper management to use when tracking future performance. The review team identified several areas of concern within the program, many of which we have highlighted, and recommended a list of leading indicators that project management should consider tracking. The Director of the Office of Evaluation said that the project is generally performing the activities and maintaining the schedule set forth in the replan; however, the team identified key areas that should be monitored as the project moves forward. The review team also recommended a set of leading indicators for project management to consider tracking to measure and monitor progress. The Director added that these indicators are for the project to use and would not be specific criteria for use by independent review boards such as the Standing Review Board. These indicators are a positive step to ensure that NASA management has the information necessary to monitor the progress of the JWST project. See table 3 below for the concerns raised by the review team. The new JWST project manager re-emphasized the importance of the project’s risk management system and, in August 2012, a new risk management database was implemented to support the system. The project manager told us that he evaluated the risk management system being utilized by the project when he assumed his position and found it to be ineffective and not robust, especially for a project as complex as JWST. While the basic risk management methodology remains unchanged, the project manager wanted a more regimented system. 
For example, the project utilizes a hierarchy of risk boards that periodically reviews and provides disposition of all new and existing risks. These risk boards reviewed and assessed new risks and lower level risk board actions and met on an ad hoc basis. The project manager instituted a more regimented system that re-emphasized and revised the weekly project risk board meetings. Lower level risk boards meet a minimum of once a month depending on activity. The project manager also determined that a new risk management database needed to be put in place that would bring more rigor to the risk management process. The project manager told us that he directed an overhaul of the risk management database to provide more complete information to management on the purpose and history for each risk. The goal was to improve consistency in how the project determined the potential for a risk to occur and its impact, and provide greater detail on mitigation and better tracking of the status for each risk. For example, the new system puts more emphasis on understanding and capturing the key events in the mitigation plan that are intended to result in a change in likelihood or consequence of a risk. The new system has a provision where the mitigation plan will be entered and updated over time, and the capability to store data such as mitigation steps throughout the life of the risk. In addition, the new system now archives data automatically to provide a traceable history of the risk. The prior data system did not have as robust of an archiving function. Furthermore, the project manager wanted to improve the linkage between the risk database entries and financial records to ensure consistency of the data in the risk database with regard to cost and schedule for risk mitigations with project office financial records. As the changes to the risk management system and database, as well as other changes we identified that were put in place to enhance oversight were just recently implemented, we will continue to monitor their continued use and assess the impact they may be having on the project. The JWST project is among the most challenging and high-risk projects NASA has pursued in recent years. It is also one of the most expensive, with a recent major replan resulting in a total cost of $8.8 billion. The reasons for cost and schedule growth were largely recognized by an independent review team to be rooted in ineffective funding, management, communication, and oversight. NASA has invested considerable time and resources replanning the project and instituting management and oversight improvements in order to ensure that it (1) can be executed within its new estimates and (2) has addressed the majority of issues raised in the recent independent review. It appears that communications with contractors and within NASA have improved, that a more robust risk mitigation system is in place, that more is known about what it will take to complete the project and how much it will cost, and that the project is currently meeting the majority of its milestones. Nevertheless, over the course of the next several years, the project will be executing a large amount of work with several extremely complex and challenging integration and test efforts. Because three major test and integration efforts must be completed in the last 2 years of the JWST schedule, it is essential that issues are identified and addressed early enough to be handled within the project’s current schedule. 
While the JWST oversight plan is consistent with the reviews NASA requires for all of its projects, a single independent review scheduled just over a year before launch may not be sufficient to identify and resolve problems early for a project of this magnitude. A key element of overseeing project progress is monitoring how the project is executing to its cost baseline. To that end, while NASA took some steps that were in line with best practices to develop its revised baseline, some of the deficiencies we found in its process could impact the reliability of the cost estimate and the joint cost and schedule confidence level that was provided to headquarters decision-makers. Without higher-fidelity, regularly updated information related to costs, as well as an oversight regime during later phases of test and integration that is commensurate with the complexity of that effort, NASA risks late identification of technical and cost issues that could delay the launch of JWST and increase project costs beyond established baselines. Also important to oversight for the remainder of the project is the ability of officials to sustain improvements to communication with and oversight of contractors. Anticipated travel restrictions, however, could decrease the project team’s ability to sustain these actions. Without a plan to address such reductions in future years, the project could once again become susceptible to communication and oversight problems identified in earlier reviews, which could also have a detrimental impact on continued project performance. To ensure that the JWST life-cycle cost estimate conforms to best practices, GAO recommends that the NASA Administrator direct JWST officials to take the following three actions to provide high-fidelity cost information for monitoring project progress: improve cost estimate documentation and continually update it to reflect earned value management actual costs and record any reasons for variances, conduct a sensitivity analysis on the number of staff working on the program to determine how staff variations affect the cost estimate, and perform an updated integrated cost/schedule risk analysis, or joint cost and schedule confidence level analysis, using a schedule that meets best practices and includes enough detail so that risks can be appropriately mapped to activities and costs; historical, analogous data should be used to support the risk analysis. To ensure that technical risks and challenges are being effectively managed and that sufficient oversight is in place and can be sustained, GAO recommends that the NASA Administrator direct JWST officials to take the following three actions: conduct a separate independent review prior to the beginning of the OTIS and spacecraft integration and test efforts to allow the project’s independent standing review board the opportunity to evaluate the readiness of the project to move forward, given the lack of schedule flexibility once these efforts are under way, schedule the management review and approval to proceed to integration and test (key decision point D or KDP-D) prior to the start of the observatory integration and test effort, and devise an effective, long-term plan for project office oversight of its contractors that takes into consideration the anticipated travel budget reductions. NASA provided written comments on a draft of this report. These comments are reprinted in appendix IV. NASA also provided technical comments, which were incorporated as appropriate.
In responding to a draft of this report, NASA concurred with three recommendations and partially concurred with three other recommendations and commented on actions in process or planned in response. In some cases, these actions meet the intent and are responsive to issues we raise; however, some of the responses do not fully address the issues we raised in the report. NASA partially concurred with our recommendation to improve the cost estimate documentation of the JWST project, and to continually update it to reflect earned value management actual costs and record any reasons for variances between planned and actual costs. In response to this recommendation, NASA officials stated that the project currently receives earned value data from some of its contractors and performs monthly analysis of that data to understand the contractors’ estimates at completion, and then compares these numbers to similar figures independently assessed by the JWST project. NASA also highlighted its efforts to improve the agency’s documentation of the earned value variances and to extend the earned value management analysis to areas where it is not yet implemented, such as ground systems development at the Space Telescope Science Institute. In addition, NASA responded that its annual budget process generates a requirements-driven budget plan consistent with the rebaseline. NASA stated that this information is updated in several different documents that are provided to management and it does not plan to revise its JCL documentation developed during the replan. Despite these steps, we could not independently confirm that they were leading to an updated cost estimate, which is the basis of our recommendation. If the estimate is not updated, it will be difficult to analyze changes in project costs and collecting cost and technical data to support future estimates will be hindered. Furthermore, if not properly updated on a regular basis, the cost estimate cannot provide decision makers with accurate information for assessing alternative decisions. Without a documented comparison between the current estimate (updated with actual costs) and the old estimate, the cost estimator cannot determine the level of variance between the two estimates and cannot see how the project is changing over time. Therefore, we continue to believe NASA will be well served by following best practices and updating its cost estimate with current information and documenting reasons for any variances. We encourage the project to improve the cost estimate documentation and record any reasons for variances between planned and actual costs and we intend to review the documentation as a part of our ongoing review of the project. NASA officials partially concurred with our recommendation that the project conduct a sensitivity analysis on the number of staff working on the project to determine how staff variations affect the cost estimate. In its response, the agency stated that it believes it met the intent of this recommendation when staffing levels were determined in the 2011 JWST rebaseline based on programmatic experience from the accomplishment of similar activities. To accommodate the possibility of increased costs based on increased staffing hours, NASA reports that funded schedule reserve was built into the JWST rebaseline, in addition to unallocated future expenses being held at various levels of the organization. 
NASA believes that these reserves will be sufficient to cover increases for the duration of specific activities that result in increased staffing cost, and that an additional workforce sensitivity analysis is not warranted. NASA added that the joint cost and schedule confidence level analysis performed provided a de facto workforce sensitivity analysis and does not plan any further action. A joint cost and schedule confidence level analysis, however, is not the same as a sensitivity analysis wherein the sources of the workforce variation should be well documented and traceable. While we appreciate the steps NASA took to account for workforce variation, the JWST cost model does not show how staff levels increasing or decreasing over time affects cost. Furthermore, best practices call for a risk analysis to be conducted in conjunction with a sensitivity analysis, not to be a substitute for it. As a best practice, a sensitivity analysis should be included in all cost estimates because it examines the effects of changing assumptions and ground rules. Since uncertainty cannot be avoided, it is necessary to identify the cost elements that represent the most risk and, if possible, cost estimators should quantify the risk. Without performing a sensitivity analysis that reveals how the cost estimate is affected by a change in a single assumption, such as workforce size, the cost estimator will not fully understand which variable most affects the cost estimate. Therefore, we continue to believe that NASA should conduct a sensitivity analysis for the JWST project, given the large number of staff working on the program, to determine how staff variations positively or negatively affect the cost estimate rather than relying on schedule reserve and unallocated future expenses to offset any shortfall. NASA concurred with our recommendation to perform an updated integrated cost and schedule risk analysis using a schedule that meets best practices and includes enough detail so that risks can be appropriately mapped to activities and costs. In response to this recommendation, NASA stated that the agency is already using tools and a method to conduct programmatic assessments of projects after the baseline was established using the JCL methodology. While these may be good tools, the key point is the need to address shortcomings of the schedule that supports the baseline itself. For example, the lack of detail in the summary schedule used for the joint cost and schedule risk analysis prevented us from sufficiently understanding how risks were incorporated; therefore, we question the results of that analysis. Since the JCL was a key input to the decision process of approving the project’s new cost and schedule baseline estimates, we maintain that the JWST project should perform an updated JCL analysis using a schedule with sufficient detail to map risks to activities and costs. Doing so could help increase the reliability of the cost estimate and the confidence level of the JCL. Furthermore, risk management is a continuous process that constantly monitors a project’s health. Given that JWST is many years from launch and the risks that the project faces are likely to change, a risk analysis should be conducted periodically throughout the life of the project. NASA concurred with our recommendation to conduct a separate independent review prior to the beginning of the OTIS and spacecraft integration and test efforts. 
In response to this recommendation, NASA stated that it will request members of the independent JWST Standing Review Board participate in OTIS Pre-Environmental Review scheduled prior to the beginning of OTIS environmental testing. A member of the Standing Review Board will co-chair this review and report its findings to the NASA Associate Administrator, which is the practice of all Standing Review Board reviews. In addition, NASA plans to direct Northrop Grumman, the spacecraft developer, to add members of the Standing Review Board, as well as members of the Goddard Independent Review Team, to the spacecraft element integration readiness review and report their findings to the NASA Associate Administrator. We believe these actions meet the intent of our recommendation and will afford an independent evaluation of the readiness of the project to move forward with its major integration and test efforts. NASA partially concurred with our recommendation to schedule the management review and approval to proceed to integration and test (KDP-D) prior to the start of the observatory integration and test effort. In response to this recommendation, NASA stated that it will reduce the 3- month gap between the scheduled system integration review and the KDP-D review, which it believes will provide NASA management and the NASA Associate Administrator with the full independent assessment earlier than currently planned. While we agree that this change will move the review earlier than previously planned, based on its response, NASA still plans to hold the review after the observatory integration and test is already underway. Holding this review after the observatory integration and test effort is already underway does not meet agency policy and will lessen the impact of the review as it may be inadequate to ensure key technical and management issues are identified early enough to be addressed. KDP-D is the point in which management approval is given to transition to the test and integration phase. We reiterate our recommendation that NASA should hold this important key decision point prior to the beginning of this last major integration and test effort, as required by agency policy. NASA concurred with our recommendation to devise an effective, long- term plan for project office oversight of its contractors that takes into consideration the anticipated travel budget reductions. In response to this recommendation, NASA stated that it will develop a plan based on fiscal year 2013 travel allocations and will take into consideration anticipated travel budget reductions. In addition, NASA stated that the plan will enable the project to maintain oversight of JWST contractors and their ability to meet performance and delivery deadlines and work closely with the international partners. We believe such a plan will be critical to ensuring adequate oversight, which is particularly important as the project enters into the complex integration and test efforts where issues will likely surface. In addition, we agree with the concerns of project officials that the current efforts to increase communication and oversight may not be sustained if reductions to future travel budgets occur as anticipated. We encourage the project to complete this plan in a timely manner and intend to review it as a part of our ongoing assessment of the project’s oversight efforts. We will send copies of the report to NASA’s Administrator and interested congressional committees. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to assess (1) the extent to which NASA’s revised cost and schedule estimates are reliable based on GAO best practices, (2) the major risks and technological challenges the James Webb Space Telescope (JWST) project faces, and (3) the extent to which the National Aeronautics and Space Administration (NASA) has improved the oversight of the JWST project. In assessing the project’s cost and schedule estimates, we performed various checks to determine that the provided data were reliable enough for our purposes. Where we discovered discrepancies, we clarified the data accordingly. Where applicable, we confirmed the accuracy of NASA-generated data with multiple sources within NASA, Northrop Grumman, the Space Telescope Science Institute, and the JWST program and project offices. After reviewing cost estimate documentation submitted by NASA and conducting numerous interviews with relevant sources within the project office, we calculated the assessment rating of each criterion within the four characteristics by assigning each an individual assessment rating: Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Met = 5. We then took the average of the individual assessment ratings for the criteria to determine the overall rating for each of the four characteristics. The resulting average becomes the “Overall Assessment” as follows: Not Met = 1.0 to 1.4, Minimally Met = 1.5 to 2.4, Partially Met = 2.5 to 3.4, Substantially Met = 3.5 to 4.4, and Met = 4.5 to 5.0. We discussed the results of our assessments with officials within the program office at NASA headquarters and the project office at Goddard Space Flight Center. We supplemented the assessment of the revised 2011 cost estimate with an assessment of the summary schedule used for the JCL, which was a part of the project’s cost estimation process, and followed criteria laid out in the GAO schedule guide. These practices address whether the schedule (1) captured all activities; (2) sequenced all activities—that is, listed them in the order in which they are to be carried out; (3) assigned resources to all activities; (4) established the duration of all activities; (5) integrated schedule activities horizontally and vertically, which identifies whether products and outcomes associated with other sequenced activities are arranged in the right order, and that varying levels of activities and supporting subactivities are also aligned properly; (6) established the critical path for all activities, which is the longest continuous sequence of activities and is necessary to examine the effects of activities slipping in the schedule; (7) identified float between activities, which is the amount of time by which a predecessor activity can slip before the delay affects the program’s estimated finish date; (8) identified a level of confidence using a schedule risk analysis; and (9) was updated using logic and durations to determine dates.
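The scoring approach described above—mapping each criterion rating to a value from 1 (Not Met) to 5 (Met), averaging the values, and binning the average into an overall assessment—can be expressed in a few lines of Python. The example ratings in the sketch below are invented and do not correspond to the actual criterion-level results for JWST.

    # Sketch of the scoring method described above; the bins follow the ranges
    # stated in the text, and the example ratings are for illustration only.
    RATING_SCORES = {"Not Met": 1, "Minimally Met": 2, "Partially Met": 3,
                     "Substantially Met": 4, "Met": 5}

    def overall_assessment(ratings):
        average = sum(RATING_SCORES[r] for r in ratings) / len(ratings)
        if average < 1.5:   label = "Not Met"
        elif average < 2.5: label = "Minimally Met"
        elif average < 3.5: label = "Partially Met"
        elif average < 4.5: label = "Substantially Met"
        else:               label = "Met"
        return average, label

    example = ["Met", "Partially Met", "Substantially Met", "Partially Met"]
    avg, label = overall_assessment(example)
    print(f"Average {avg:.2f} -> {label}")   # 3.75 -> Substantially Met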
We also reviewed the inputs to the JCL model and the document outlining the methodology of the analysis that accompanied the electronic files, and we interviewed cognizant project officials to discuss their use of the summary schedule. Because the project’s detailed integrated master schedule has not been finalized due to ongoing negotiations and contract modifications, we did not conduct a complete schedule analysis using the GAO schedule assessment guide. We plan to perform this assessment in a subsequent review of the JWST project. To assess the major short- and long-term risks and technological challenges facing the project, we reviewed the project’s risk list, monthly status reviews, and other documentation provided by project and contractor officials. This information covered the risks, mitigation plans, and timelines for addressing risks and technological challenges. We also interviewed project officials for each major observatory subsystem to clarify information and to obtain additional information on risks and technological challenges. Further, we interviewed officials from the Jet Propulsion Laboratory, Northrop Grumman Aerospace Systems, Lockheed Martin Advanced Technology Company, Teledyne Imaging Sensors, the University of Arizona, and the Space Telescope Science Institute concerning risks and challenges on the subsystems, instruments, or components they were developing. We reviewed GAO’s prior work on NASA Large Scale Acquisitions, NASA Office of Inspector General reports, and NASA’s Space Flight Program and Project Management Requirements and Systems Engineering Processes and Requirements. We compared NASA’s controls as outlined in these agency policy documents with the project plan to assess the extent to which the JWST plan followed the intent of the policies with regard to independent oversight and management approval processes. To assess the extent to which NASA is performing enhanced oversight of the JWST project, we reviewed documentation from the Independent Comprehensive Review Panel and the project to determine actions taken by NASA in response to the panel’s recommendations. We interviewed project officials to understand the impact of these changes on the oversight processes for the project and communication between the project and its contractors. We also interviewed officials from the Jet Propulsion Laboratory, Northrop Grumman Aerospace Systems, Lockheed Martin Advanced Technology Company, Teledyne Imaging Sensors, the University of Arizona, and the Space Telescope Science Institute concerning project oversight of work they were performing and the effectiveness of oversight changes. In addition, we reviewed a presidential directive as well as Office of Management and Budget and project documentation and interviewed project officials concerning the reductions to travel budgets and their impact on project oversight activities. We interviewed the Director of NASA’s Office of Evaluation about a recent internal review of the JWST project and reviewed documentation from that review. We also reviewed documentation and interviewed project officials concerning the changes made to the project’s risk management system. Our work was performed primarily at NASA Headquarters in Washington, D.C., and Goddard Space Flight Center in Greenbelt, Maryland. We also visited Johnson Space Center in Houston, Texas, and the Jet Propulsion Laboratory in Pasadena, California.
In addition, we met with representatives from Northrop Grumman Aerospace Systems, Lockheed Martin Advanced Technology Company, Teledyne Imaging Sensors, the University of Arizona, and the Space Telescope Science Institute.

We conducted this performance audit from February 2012 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In determining that the National Aeronautics and Space Administration's (NASA) processes for developing the James Webb Space Telescope (JWST) cost estimate do not fully comply with best practices, we evaluated the project's cost estimation methods against our 2009 Cost Estimating and Assessment Guide. (See table 4.) We applied the following scale across the four categories of best practices:

Not met: NASA provided no evidence that satisfies any portion of the criterion.
Minimally met: NASA provided evidence that satisfies less than one-half of the criterion.
Partially met: NASA provided evidence that satisfies about one-half of the criterion.
Substantially met: NASA provided evidence that satisfies more than one-half of the criterion.
Met: NASA provided complete evidence that satisfies the entire criterion.

In addition to the contact named above, Shelby S. Oakley, Assistant Director; Karen Richey, Assistant Director; Richard A. Cederholm; Laura Greifner; Cheryl M. Harris; David Hulett; Jason Lee; Kenneth E. Patton; Sylvia Schatz; Stacey Steele; Roxanna T. Sun; Jay Tallon; and Jade A. Winfree made key contributions to this report.
JWST is one of NASA's most expensive and technologically advanced science projects, intended to advance understanding of the origin of the universe. In 2011, JWST was rebaselined with a life cycle cost estimate of $8.8 billion and a launch readiness date in October 2018--almost nine times the cost and more than a decade later than originally projected in 1999. Concern about the magnitude of JWST's cost increase and schedule delay and their effects on NASA's progress on other high-priority missions led conferees for the Consolidated and Further Continuing Appropriations Act, 2012, to direct GAO to report on the project. Specifically, GAO assessed (1) the extent to which NASA's revised cost and schedule estimates are reliable based on best practices, (2) the major risks and technological challenges JWST faces, and (3) the extent to which NASA has improved oversight of JWST. To do this, GAO compared NASA's revised cost and schedule estimates with best practice criteria, reviewed relevant contractor and NASA documents, and interviewed project and contractor officials.

The National Aeronautics and Space Administration (NASA) has provided significantly more time and money to the James Webb Space Telescope (JWST) than previously planned and expressed high confidence in the project's new baselines. Its current cost estimate reflects some features of best practices for developing reliable and credible estimates. For example, the estimate substantially meets one of four cost characteristics--comprehensive--that GAO looks for in a reliable cost estimate, in part because all life cycle costs were included. The estimate, however, only partially met the other three characteristics--well documented, accurate, and credible--which detracts from its reliability. For example, the estimate's accuracy, and therefore the confidence level assigned to the estimate, was lessened by the summary schedule used for the joint cost and schedule risk analysis because it did not provide enough detail to determine how risks were applied to critical project activities. The estimate's credibility was also lessened because officials did not perform a sensitivity analysis that would have identified key drivers of costs, such as workforce size. Program officials believe that it would have been difficult to fully address all best practice characteristics. GAO believes there is time to improve the estimate and enhance the prospects for delivering the project according to plan.

Project officials report that the JWST schedule has 14 months of reserve, which meets Goddard guidance for schedule reserve; however, only 7 of the 14 months are likely to be available for the last three of JWST's five complex integration and test efforts. GAO's prior work shows that the integration and test phases are where problems are commonly found and schedules tend to slip. Given JWST's challenging integration and test schedule, such slippage could be particularly likely. The project has made some significant progress in the past year, notably successfully completing development of the 18 primary mirror segments--considered JWST's top technical risk. Nevertheless, ongoing challenges are indicative of the kinds of issues that can require significant effort to address. For example, instrument challenges have delayed the first integration and test effort. In addition, key long-term risks on subsystems with a significant amount of work remaining will not be retired until 2016.
Currently, NASA's plan for project oversight calls for one independent mission-level system integration review about 13 months before launch. While this is consistent with what NASA requires for its projects, this approach may not be sufficient for a project as complex as JWST. The JWST project has taken several steps to improve communications and oversight of the project and its contractors--such as taking over responsibility for mission systems engineering from the prime contractor; instituting meetings that include various levels of NASA, contractor, and subcontractor management; and implementing a new risk management system to allow for better tracking of risks. The enhancements to the oversight of the project are steps in the right direction, but it will take time to assess their effectiveness and ensure that the efforts are sustained by the project in the future. Reductions in travel budgets, however, could require the project to adjust the oversight approach that was adopted as a result of the replan. Additional reductions in travel budgets are anticipated in future years, but officials do not have a plan to address such reductions and their potential impact on continuing the current oversight approach.

GAO recommends that NASA take six actions, including taking steps to improve its cost estimate, conducting an additional, earlier independent review of integration and test activities, and developing a long-term oversight plan that anticipates planned travel budget reductions.
Taxpayers' experience depends heavily on IRS's performance during the tax filing season, roughly mid-January through mid-April. During this period, millions of taxpayers who are trying to fulfill their tax obligations contact IRS over the phone, face-to-face, and via the Internet to obtain answers to tax law questions and information about their tax accounts. This period is also when IRS processes the bulk of the approximately 140 million returns it will receive, runs initial compliance screens, and issues over 100 million refunds. In recent years, IRS has improved its returns processing but has seen its taxpayer service performance deteriorate.

For years we have reported that electronic filing (e-filing) has many benefits for taxpayers, such as higher accuracy rates and faster refunds compared to filing on paper. So far in 2012, the percentage of e-filed returns has increased by 1.9 percentage points since about the same time last year, to 88.8 percent (a 2.2 percent increase), as table 1 shows. Since the same time in 2007, the percentage of e-filed returns has increased from 72.3 percent to 88.8 percent. This year, IRS may meet its long-held goal of having 80 percent of individual tax returns e-filed. However, the overall e-file percentage is likely to decline as the tax filing season ends since IRS typically receives more returns filed on paper later in the filing season.

In addition, IRS is in the midst of a multi-phase modernization project, known as the Customer Account Data Engine (CADE) 2, which will fundamentally change how it processes returns. With CADE 2, IRS also expects to be able to issue refunds in 4 business days for direct deposit and 6 business days for paper checks after IRS processes the return and posts the return data to the taxpayer's account. Early in the 2012 filing season, IRS experienced two processing problems that delayed refunds to millions of taxpayers, and reported the problems had been resolved by mid-February. We summarized these problems in an interim report on the 2012 filing season.

Providing good taxpayer service is important because, without it, taxpayers may not be able to obtain the necessary and accurate information they need to comply with tax laws. In addition, more and more, taxpayers are relying on IRS's website to obtain information and execute transactions, making it important that IRS have a modern website. However, as we have reported, IRS has experienced declines in performance in selected taxpayer service areas, most notably with respect to providing live telephone assistance and timely responses to taxpayers' correspondence. When taxpayers do not receive timely responses from IRS to paper correspondence or cannot find the information they need online, they call IRS, correspond again, or seek face-to-face assistance—all of which are costly to IRS and burdensome to the taxpayer. Table 2 shows the declines in telephone service and paper correspondence and the goals for 2012 and 2013. Additional performance data is shown in appendix I.

To improve the taxpayer experience and voluntary compliance, IRS has a range of options. Some of its options could provide taxpayers with better information to accurately fulfill their tax obligations. Other options would allow IRS to take enforcement actions sooner and with less burden on taxpayers. Simplifying the tax code could reduce unintentional errors and make intentional tax evasion harder.
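Because the e-file figures above cite both a percentage-point change and a percent change, the short sketch below shows, using only the numbers given in the text, how the two relate. It is a rough illustration of the arithmetic, not IRS data; the prior-year share is derived from the figures cited rather than taken from a source.

```python
# Rough illustration of how a change measured in percentage points relates to a
# percent change, using the e-file figures cited above.
current_share = 88.8                                    # percent of returns e-filed so far in 2012
point_change = 1.9                                      # increase in percentage points
prior_share = current_share - point_change              # implied share a year earlier: 86.9 percent
percent_change = 100 * point_change / prior_share       # relative growth, about 2.2 percent
print(round(prior_share, 1), round(percent_change, 1))  # 86.9 2.2
```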
develop an online locator tool listing volunteer tax preparation sites—and IRS introduced an enhanced volunteer site locator tool in 2012; complete an Internet strategy that provides a justification for online self-service tools as IRS expands its capacity to introduce such tools.

In addition to actions we recommended, IRS is also studying ways to better communicate with taxpayers and determine which self-service tools would be the most beneficial to taxpayers. According to IRS officials, the study should be completed later this year. Identifying more efficient ways to provide service also benefits IRS because it is able to make better use of scarce resources.

Paid preparers and tax software providers combine to prepare about 90 percent of tax returns, making paid preparers an important intermediary between taxpayers and IRS. IRS is implementing new requirements for paid preparers, and as it develops better data on preparers, it should be able to determine which requirements are most effective for improving the quality of tax returns prepared by different types of paid preparers. Likewise, IRS has discussed how to measure the effect of the requirements, such as requiring continuing education and testing, on tax return accuracy. It will take years to implement the approach, as it will likely evolve over time and become more detailed.

Tax preparation software is another critical part of tax administration. Almost 30 percent of taxpayers use such software to prepare their returns and, in the process, understand their tax obligations, learn about tax law changes, and get questions answered. Many also electronically file through their software provider. Consequently, tax software companies are another important intermediary between taxpayers and IRS. We have reported that IRS has made considerable progress in working with tax software companies to, for example, provide clearer information about why an e-filed return was not accepted, require additional information on returns to allow IRS to better identify the software used, and enhance security requirements for e-filing. To illustrate the potential for leveraging tax software companies to improve taxpayer compliance, 4 years ago we recommended, and IRS agreed, to expand outreach efforts to external stakeholders and include software companies as part of an effort to reduce common types of misreporting related to rental real estate. In another report, we discussed the value of research to better understand how tax software influences compliance.

IRS has volunteer partners, often nonprofit organizations or universities, that staff over 12,000 volunteer sites. Volunteers at these sites prepare several million tax returns for traditionally underserved taxpayers, including the elderly, low-income, disabled, and those with limited English proficiency. In recent reports, we have made recommendations about estimating the effectiveness of targeting underserved populations at such sites and making it easier for taxpayers to find the locations of nearby sites. IRS has opportunities to continue to work with these volunteer partners to help improve assistance to taxpayers with the goal of improving compliance.

Information reporting is a proven tool that reduces tax evasion, reduces taxpayer burden, and helps taxpayers voluntarily comply. This is, in part, because taxpayers have more accurate information to complete their returns and do not have to keep records themselves. In addition, IRS research shows that when taxpayers know that IRS is receiving data from third parties, they are more likely to correctly report the income or expenses to IRS.
As part of its recent update of its tax gap estimates, IRS estimated that income subject to substantial information reporting, such as pension, dividend, interest, unemployment, and Social Security income, was misreported at an 8 percent rate, compared to a 56 percent misreporting rate for income with little or no information reporting, such as sole proprietor, rent, and royalty income. See GAO, Tax Administration: Costs and Uses of Third-Party Information Returns, GAO-08-266 (Washington, D.C.: Nov. 20, 2007).

Congress has recently expanded information reporting, including requirements for banks and others to report businesses' payment card receipts, for brokers to report the cost basis of certain securities sales, and for withholding on certain payments to foreign financial institutions that have not entered into an agreement with IRS to report details on U.S. account holders to IRS. As these three sets of information reporting requirements have only recently taken effect, it is too soon to tell the impact they are having on taxpayer compliance.

We have made recommendations or suggested possible legislative changes in several other areas in which IRS could benefit from additional information reporting. They include the following:

Service payments made by landlords. Taxpayers who rent out real estate are required to report to IRS expense payments for certain services, such as payments for property repairs, only if their rental activity is considered a trade or business. However, the law does not clearly spell out how to determine when rental real estate activity is considered a trade or business.

Service payments to corporations. Currently, businesses must report to IRS payments for services they make to unincorporated persons or businesses, but payments to corporations generally do not have to be reported.

Broader requirements for these two forms of information reporting, covering goods in addition to services, were enacted into law in 2010 but later repealed. We believe the narrower extensions of information reporting to include services, but not goods, remain important options for improving compliance.

Additionally, we have identified existing information reporting requirements that could be enhanced. Examples include the following:

Mortgage interest and rental real estate. We recommended requiring information return providers to report the address of a property securing a mortgage, mortgage balances, and an indicator of whether the mortgage is for a current-year refinancing when filing mortgage interest statements (Form 1098), which could help taxpayers comply with and IRS enforce rules associated with the mortgage interest deduction. We have reported that collecting the address of the secured property on Form 1098 would help taxpayers better understand and IRS better enforce requirements for reporting income from rental real estate.

Higher education expenses. Eligible educational institutions are currently required to report information on qualified tuition and related expenses for higher education so that taxpayers can determine the amount of educational tax benefits they can claim. However, the reporting does not always separate eligible from ineligible expenses. We recommended revising the information reporting form, which could improve the usefulness of reported information.

Identifying additional third-party reporting opportunities is challenging. Considerations include whether third parties exist that have accurate information available in a timely manner, the burden of reporting, and whether IRS can enforce the reporting requirement. We have noted, for example, that there is little third-party reporting on sole proprietor expenses because of the difficulty of identifying third parties that could report on expenses such as the business use of cars.
Modernized systems should better position IRS to conduct more accurate and faster compliance checks, which benefits taxpayers by detecting errors before interest and penalties accrue. In addition, modernized systems should result in more up-to-date account information, faster refunds, and other benefits, such as clearer notices so that taxpayers can better understand why a return was not accepted by IRS. Two new, modernized systems IRS is implementing include the following:

Customer Account Data Engine (CADE) 2. For the 2012 filing season, IRS implemented the first of three phases to introduce modernized tax return processing systems. Specifically, IRS introduced a modernized taxpayer account database, called CADE 2, and moved the processing of individual taxpayer accounts from a weekly to a daily processing cycle. IRS expects that completing this first phase will provide taxpayers with benefits such as faster refunds and notices and updated account information. IRS initially expected to implement phase two of CADE 2 by 2014. However, IRS reported that it did not receive funding in fiscal year 2011 that would have allowed it to meet the 2014 time frame.

Modernized e-File (MeF). IRS is in the final stages of retiring its legacy e-file system, which preparers and others use to transmit e-filed returns to IRS, and replacing it with MeF. Early in the 2012 filing season, IRS experienced problems transferring data from MeF to other IRS systems. IRS officials said that they solved the problem in early February. IRS officials recently reiterated their intention to turn off the legacy e-file system in October 2012 as planned; more recently, however, IRS processing officials told us they would reevaluate the situation after the 2012 filing season. MeF's benefits include allowing taxpayers to provide additional documentation via portable document format (PDF) files, as opposed to filing on paper. In addition, MeF should generate clearer notices to taxpayers when a return is rejected by IRS compared to the legacy e-file system.

The Commissioner of Internal Revenue has talked about a long-term vision to increase compliance checks before refunds are sent to taxpayers. As previously noted, early error correction can benefit taxpayers by preventing interest and penalties from accumulating. In one example, IRS is exploring a process in which third parties would send information returns to IRS earlier so they could be matched against taxpayers' returns when the taxpayer files the return, as opposed to the current requirement that some information returns go to taxpayers before being sent to IRS. The intent is to allow IRS to match those information returns to tax returns during the filing season rather than after refunds have been issued.

Another option for expanding pre-refund compliance checks is additional math error authority (MEA), which Congress would need to grant IRS through statute. MEA allows IRS to correct calculation errors and check for obvious noncompliance, such as claims above income and credit limits. Despite its name, MEA encompasses much more than simple arithmetic errors. It also includes, for instance, identifying incorrect Social Security numbers or missing forms. The errors being corrected can either be in the taxpayers' favor or result in additional tax being owed. MEA is less intrusive and burdensome to taxpayers than audits and reduces costs to IRS. It also generally allows taxpayers who make errors on their returns to receive refunds faster than if they are audited.
This is due, in part, to the fact that IRS does not have to follow its standard deficiency procedures when using MEA—it must only notify the taxpayer that the assessment has been made and provide an explanation of the error. Taxpayers have 60 days after the notice is sent to request an abatement. Although IRS has MEA to correct certain errors on a case-by-case basis, it does not have broad authority to do so. In 2010, we suggested that Congress consider broadening IRS’s MEA with appropriate safeguards against the misuse of that authority. In the absence of broader MEA, we have identified specific cases where IRS could benefit from additional MEA that have yet to be enacted. These include authority to: use prior years’ tax return information to ensure that taxpayers do not improperly claim credits or deductions in excess of applicable lifetime limits, use prior years’ tax return information to automatically verify taxpayers’ compliance with the number of years the Hope credit can be claimed, and identify and correct returns with ineligible (1) individual retirement account (IRA) “catch-up” contributions and (2) contributions to traditional IRAs from taxpayers over age 70½. In 2009, Congress enacted our suggestion that IRS use MEA to ensure that taxpayers do not improperly claim the First-Time Homebuyer Credit in multiple years, which we estimate resulted in savings of about $95 million. Tax code complexity can make it difficult for taxpayers to voluntarily comply. Efforts to simplify or reform the tax code may help reduce burdensome record keeping requirements for taxpayers and make it easier for individuals and businesses to understand and voluntarily comply with their tax obligations. For example, eliminating or combining tax expenditures, such as exemptions, deductions, and credits, could help taxpayers reduce unintentional errors and limit opportunities for tax evasion. Frequent changes in the tax code also reduce its stability, making tax planning more difficult and increasing uncertainty about future tax liabilities. Limiting the frequency of changes to the tax code could also help reduce calls to IRS with questions about the changes. We have reported that IRS annually receives millions of calls about tax law changes. Reducing complexity in the tax code could take a variety of forms, ranging from comprehensive tax reform to a more incremental approach focusing on specific tax provisions. Policymakers may find it useful to compare any proposed changes to the tax code based on a set of widely accepted criteria for assessing alternative tax proposals. These criteria include the equity, or fairness, of the tax system; the economic efficiency, or neutrality, of the system; and the simplicity, transparency, and administrability of the system. These criteria can sometimes conflict, and the weight one places on each criterion will vary among individuals. Our publication Understanding the Tax Reform Debate: Background, Criteria, & Questions may be useful in guiding policymakers as they consider tax reform proposals. In closing, improving the taxpayer experience and increasing voluntary compliance will not be achieved through a single solution. Because voluntary compliance is influenced by so many factors, multiple approaches, such as those listed here, will be needed. Chairman Baucus, Ranking Member Hatch, and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you and Members of the Committee may have at this time. 
For further information regarding this testimony, please contact James R. White, Director, Strategic Issues, at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Joanna Stamatiades, Assistant Director; LaKeshia Allen; David Fox; Tom Gilbert; Kirsten Lauber; Sabrina Streagle; and Weifei Zheng.

As shown in table 3, in recent years the level of access to telephone assistors has declined and average wait time has increased. In addition, the volume of overage correspondence has steadily increased. On a positive note, tax law and account accuracy remains high. As shown in table 4, access to IRS assistors has declined over the last few years. IRS officials attribute the higher-than-planned level of service so far this year to a slight decline in the demand for live assistance. At the same time, the number of automated calls has significantly increased, which IRS officials attributed in part to taxpayers calling about refunds and requesting transcripts (i.e., a copy of their tax return information).
The U.S. tax system depends on taxpayers calculating their tax liability, filing their tax return, and paying what they owe on time—what is often referred to as voluntary compliance. Voluntary compliance depends on a number of factors, including the quality of IRS's assistance to taxpayers, knowledge that its enforcement programs are effective, and a belief that the tax system is fair and other people are paying their share of taxes. Voluntary compliance is also influenced by other parties, including paid tax return preparers, tax software companies, and information return filers (employers, financial institutions, and others who report income or expense information about taxpayers to IRS).

For this testimony, GAO was asked to (1) evaluate the current state of IRS's performance and its effect on the taxpayer experience, and (2) identify opportunities to improve the taxpayer experience and voluntary compliance. This testimony is based on prior GAO reports and recommendations. Additionally, GAO analyzed IRS data on its delivery of selected taxpayer services in recent years.

The Internal Revenue Service (IRS) has made improvements in processing tax returns, and electronic filing (e-filing), which provides benefits to taxpayers including faster refunds, continues to increase. However, IRS's performance in providing service over the phone and responding to paper correspondence has declined in recent years. For 2012, as in previous years, IRS officials attribute the lower performance to other funding priorities.

The following are among the opportunities to improve the taxpayer experience and increase voluntary compliance that GAO identifies in this testimony:

IRS can provide more self-service tools to give taxpayers better access to information. IRS can create an automated telephone line for amended returns (a source of high call volume) and complete an online services strategy that provides justification for adding new self-service tools online.

Better leveraging of third parties could provide taxpayers with other avenues to receive service. Paid preparers and tax software providers combine to prepare about 90 percent of tax returns. IRS is making progress implementing new regulation of paid preparers. As it develops better data, IRS should be able to test strategies for improving the quality of tax return preparation by paid preparers. Similarly, IRS may also be able to leverage tax software companies.

Expanded information reporting could reduce taxpayer burden and improve accuracy. Expanded information reporting, such as the recent requirements for banks and others to report businesses' credit card receipts to IRS, can reduce taxpayers' record keeping and give IRS another tool.

Implementing modernized systems should provide faster refunds and account updates. Modernized systems should allow IRS to conduct more accurate and faster compliance checks, which benefits taxpayers by detecting errors before interest and penalties accrue.

Expanding pre-refund compliance checks could result in more efficient error correction. Expanding such checks could reduce the burden of audits on taxpayers and their costs to IRS.

Reducing tax complexity could ease taxpayer burden and make it easier to comply. Simplifying the tax code could reduce unintentional errors and make intentional tax evasion easier to detect.

GAO has made numerous prior recommendations that could help improve the taxpayer experience.
Congress and IRS have acted on some recommendations, while others are reflected in the strategies presented in this testimony.
The expansion of world trade and investment has led to the increasing integration of the world economy in recent decades—a process often referred to as "globalization." Total trade in developing countries, exports and imports, rose from less than $1.5 trillion in 1990 to $3.8 trillion in 2002, while foreign direct investment in developing countries grew even faster during this period, from $22 billion to $154 billion. Some view globalization as fostering economic growth, increasing employment, and improving living standards in both developed and developing nations. At the same time, others view globalization as resulting in negative social impacts and raise concerns about the expanding activities of multinational corporations, particularly in developing countries.

U.S. multinational corporations are now faced with difficult issues, such as the treatment and conditions of foreign workers in corporate supply chains, environmental and health issues associated with production in diverse local communities, and human rights issues associated with authoritarian governments in countries where multinationals operate. In addition, some negative incidents involving U.S.-based companies, such as the use of sweatshops in the manufacture of clothing and other products, have been widely publicized, hurting their own and the United States' image. In another example, a U.S.-based company recently came under allegations that its overseas mining operations produced toxic waste that has caused illnesses.

U.S. corporations are increasingly building operations or buying products from sources in developing countries. However, the legal, regulatory, and ethical environments in which U.S. businesses and their suppliers operate vary across countries. For example, some have asserted that developing countries have inadequate or poorly enforced environmental and labor laws. Given the limited capacity of some developing countries, CSR advocates argue that corporations themselves must establish and maintain codes of conduct regarding operating standards in these environments. Companies face increasing pressure from nongovernmental organizations (NGO), the media, "socially responsible" investor groups, and other stakeholders to adhere to high standards globally in their own operations and throughout their supply chains. In addition, some members of Congress have shown support for CSR-related policies, similar to those advocated by working groups convened by the Kenan Institute.

In response to these business challenges and outside pressures, companies are increasingly adopting "corporate social responsibility" programs. For example, U.S. electronics companies recently signed a joint code of conduct to protect working conditions, workers' rights, and the environment in the electronics industry supply chain. A number of U.S. companies have instituted programs to address HIV/AIDS and other diseases in their operations in developing countries, for example, by raising awareness or providing access to treatment. Most recently, U.S. companies provided nearly $453 million to relief efforts in the wake of the tsunami that hit South and Southeast Asia and East Africa in December 2004. Despite these efforts, some CSR advocates call for more government action to promote CSR, with some noting that several national governments in Europe have put in place mechanisms to encourage or require the adoption of CSR practices.
Global CSR is an umbrella concept that can best be understood by describing the different definitions used for the term, the actions businesses take to practice CSR, and the roles of key players involved in CSR. Although groups use different definitions and terms, CSR generally involves business efforts to address a broad range of issues, including the environment, labor, and human rights. Businesses perform many different actions to address CSR concerns. The extent and type of these actions are influenced by key players in CSR that include not only businesses, but also the civil society, investor groups, multilateral organizations, and governments that seek to influence them.

The term "global CSR" is sometimes used to refer to business efforts to address the social impacts of business in the global economy. Discussions of global CSR in the context of developing countries focus on the need for business to address the gaps left by inadequate or poorly enforced laws to protect the environment, labor, human rights, and other social resources.

The term "CSR" is an umbrella concept with many different definitions. However, most definitions suggest that, in addition to addressing the interests of its shareholders, business should address the interests of its other stakeholders, including customers, employees, suppliers, and the local community. CSR definitions cover a broad range of potential social concerns, including business ethics, community development, labor, environment, and human rights. Table 1 presents sample CSR definitions.

CSR definitions vary on whether CSR is considered exclusively voluntary or whether it includes mandatory requirements for business regarding social and environmental issues. Some definitions of CSR limit it to voluntary business decisions and actions, above and beyond what is required by law. Other organizations have reasoned that CSR should include mandatory efforts, especially because in developing countries it can be a tool to encourage compliance with laws and regulations. Voluntary compliance with laws and regulations assumes a greater role in developing countries because, even where developing countries have adequate laws and regulations concerning social and environmental issues, they often have limited enforcement resources.

Some groups prefer other terms to address all or some of the ethical, social, and environmental issues addressed by CSR. For example, one business group preferred the term "corporate citizenship" because business social and environmental efforts are indicative of businesses' efforts to be good citizens, while it believes the term "CSR" implies that those efforts are a responsibility rather than voluntary. Others prefer the terms "sustainable development" or "triple bottom line," reasoning that business decisions and performance should be evaluated in terms of their economic, social, and environmental impacts. Other terms, such as "business ethics," deal with one of the many concerns of CSR. Table 2 presents definitions of some terms related to CSR.

U.S. businesses conduct many different types of actions that address CSR concerns, ranging from voluntary, such as philanthropic donations, to government mandated, such as disclosure of significant environmental conditions. These actions may or may not be part of a formal CSR effort.
Although groups categorize business actions addressing CSR concerns differently, they can broadly be grouped as relating to (1) business ethics, (2) community development, (3) environment, (4) governance, (5) human rights, (6) marketplace, and (7) workplace. In our discussions with representatives of U.S. corporations that are noted as leaders in CSR, we identified illustrative examples of U.S. companies' actions that address these categories of CSR concerns.

Business actions addressing the CSR concern of business ethics involve values such as fairness, honesty, and trust, as well as compliance with internal rules and legal requirements. Among the actions taken to address business ethics are incorporating ethics into corporate value and mission statements, developing ethics codes, conducting ethics training, and monitoring ethical performance. In one example from the companies we interviewed, the company had recently trained its workforce—including all levels of management—on its standards of business conduct and now publishes these standards in 20 languages.

Business actions addressing the CSR concern of community development involve business policies and practices intended to benefit the business and the community economically, particularly for low-income and underserved communities. Community development activities include employing and training disadvantaged workers, partnering with minority- and women-owned businesses, and locating facilities in underserved communities. One business we interviewed with a factory in South Africa works with its employees to develop the physical structures of schools for youth and adults in that community.

Business actions addressing the CSR concern of the environment involve company policies and procedures to ensure the environmental soundness of a company's operations, products, and facilities. Examples include pollution prevention, energy efficiency, and supply-chain environmental management. One company we interviewed stated that it strives to exceed minimum U.S. government standards for toxic emissions, even in foreign countries. The company stated that it had sent a team of specialists to Mexico to bring a Mexican facility to the U.S. standard.

Business actions addressing the CSR concern of corporate governance involve the broad range of policies and practices that boards of directors use to manage themselves and fulfill their responsibilities to investors and other stakeholders. Examples include developing processes for communication with stakeholders, adopting formal board guidelines, and implementing board and Chief Executive Officer (CEO) performance evaluations.

Business actions addressing the CSR concern of human rights involve assuring basic standards of treatment to all people, regardless of nationality, gender, race, economic status, or religion. Among the concerns addressed in developing human rights policies are child labor in manufacturing, government actions depriving citizens of basic civil liberties, and forced or prison labor. For example, a company we interviewed said it had signed the United Nations Global Compact, which requires businesses to comply with human rights requirements as one of its 10 principles.

Business actions addressing CSR marketplace concerns involve business relationships with customers and such issues as product manufacturing and integrity; product disclosures and labeling; and marketing, advertising, and distribution practices.
Marketplace-related actions include establishing ethical marketing and advertising policies, ensuring safety and efficacy of products, and employing ethical sales tactics. One company we interviewed that views water, health, and hygiene as its business stated it had developed low-cost water purifying systems and products to save water in hand washing and improve the lives of consumers in developing countries.

Business actions addressing CSR workplace concerns generally involve human resource policies that directly impact employees, such as compensation and benefits, career development, and health and wellness issues. Examples of workplace CSR actions include adoption of global workplace standards, involvement of employees in business decisions, and establishment of employee grievance policies and procedures.

Businesses play the central role in determining their efforts to address CSR concerns, but these efforts can also be influenced by the actions of civil society, investor groups, multilateral organizations, and government. Businesses play a central role in CSR by determining which social and environmental issues are addressed and how they are addressed. CSR literature notes that there is a growing recognition by businesses that CSR includes the way the company runs its core business, not just its philanthropic activities. Businesses can further influence CSR in their relationships with other firms through business networks, intermediaries, and supply chains. For example, a business may require or promote CSR among its business partners.

Available but not necessarily representative data on U.S. business efforts to address CSR concerns suggest that many firms conduct some CSR efforts and that a small number of firms hold themselves to more rigorous nonfinancial reporting standards on social, economic, and environmental information. A 2002 survey of U.S. firm involvement in sustainability (a term closely related to CSR) included responses from 140 U.S.-based firms that were likely among the most active U.S. companies in CSR. Three-quarters of responding firms reported practicing some form of sustainability. Large firms, defined as those having revenues over $25 billion annually, were more likely than smaller firms to issue sustainability reports, according to that same survey. Over half of the firms issuing a sustainability report indicated that they were following Global Reporting Initiative (GRI) guidelines. The GRI is an independent institution that disseminates globally applicable sustainability reporting guidelines for companies to use in reporting on the economic, environmental, and social dimensions of their activities, products, and services. As of March 2005, 69 U.S. firms had registered to use the GRI guidelines for reporting on CSR issues. Similarly, 71 U.S. firms have signed on to the United Nations Global Compact. Signatories to the Global Compact voluntarily agree to support its 10 principles in the areas of human rights, labor, environment, and anticorruption policies.

Available information from some surveys suggests that business leaders address social issues for business as well as for other reasons, including consistency with their core operating values. Two recent surveys of business executives reported that businesses practiced corporate citizenship or sustainable business practices for a variety of reasons. The voluntary nature of these surveys makes it impossible to project to the universe of all firms.
In the first survey, the majority of business respondents concurred with the statement that "good corporate citizenship helps the bottom line." Similarly, the majority of the respondents to the second survey indicated "cost savings" as a reason for adopting sustainable business practices. The majority of firms responding to the first survey also indicated that the founding traditions and core organizational values of their companies dictate their commitment to corporate citizenship. Similarly, the second survey reported that the majority of responding firms indicated CEO/board commitment as a contributing reason for their sustainable business practices. Further, this survey reported that a number of respondents stated that one reason for adopting sustainable practices was that it was "the right thing to do."

Despite over 30 years of research, no consensus has been reached on the relationship between businesses' social and financial performance. Numerous empirical research studies have attempted to determine whether those firms that engage in socially responsible practices also do well in terms of financial performance. A 1997 study that surveyed 25 years of research observes that many studies find a negative relationship between these practices and financial performance, although the largest number of studies find a positive relationship. More recent studies also reach a range of conclusions, with some finding a positive association, some finding at least a neutral association, and others finding no significant or a mildly negative relationship. A recent paper on the business justification for CSR concludes, "It has not yet been possible to make a strong, causal, quantitative link between CSR actions and financial indicators such as share price, stock-market value, return on assets and economic value added."

The difficulty in accurately measuring CSR benefits to business complicates any assessment of CSR. CSR literature, as well as discussions with CSR experts, indicates that it can be very difficult to assess the profitability of CSR actions because benefits may occur far into the future and involve intangibles such as enhanced brand and company image or other goodwill. Furthermore, the authors of a recent study suggest that the provision of CSR will vary across industries, products, and firms. For example, they argue that larger, more diversified firms, and those that produce more highly differentiated products, may be more likely to engage in CSR practices than smaller firms or those that produce in less differentiated markets. The authors further suggest that if a firm is successful in implementing a CSR action, competitors may adopt similar measures, and this may have the effect of eroding any profit advantage. As a result, they argue that there should be a neutral relationship between CSR activity and firm performance.

CSR literature recognizes the impact of civil society on raising awareness of social issues among businesses. The World Bank defines civil society as the wide array of nongovernmental and not-for-profit organizations that express the interests and values of their members or others based on ethical, cultural, political, scientific, religious, or philanthropic considerations. Civil society organizations include community groups, nongovernmental organizations (NGO), labor unions, indigenous groups, charitable organizations, faith-based organizations, professional associations, and foundations.
A recent report by the Kennedy School of Government notes that the growth in civil society is one of the drivers making CSR more mainstream. Civil society groups can serve to strengthen the links between CSR activities and business profits by increasing the transparency of corporate operations. For example, civil society activities exposing sweatshops or other questionable corporate activities can provide an incentive for firms to act in ways that would not damage their reputation. Further, civil society organizations sometimes establish standards that businesses can use to signal compliance or to enhance their reputation with their customers and other stakeholders, potentially increasing profits and firm value. In 1997, the Council on Economic Priorities Accreditation Agency released its Social Accountability (SA) 8000, a voluntary standard to help companies monitor a variety of workplace concerns. The SA 8000 provides verification of corporate performance. The Coalition for Environmentally Responsible Economies (CERES) partnered with the United Nations Environmental Program (UNEP) to oversee the development of the GRI reporting guidelines in the late 1990s. The Interfaith Center for Corporate Responsibility (ICCR), composed of over 275 religious institutions, published a guide to be used as a reference tool by companies to monitor policies in such areas as community development, environment, ethics, human rights, and workplace issues.

Investor groups such as mutual funds and pension plans are responsible for a growing proportion of U.S. investments and, therefore, are a potentially increasing influence over businesses' CSR actions. According to a report by the Social Investment Forum, a national membership organization of social investment practitioners and institutions, firms using some type of socially responsible investment strategy manage over 11 percent of all U.S. investment assets under professional management. The report further indicated that between 1995 and 2003 socially invested assets grew faster than all other types of professionally managed investment assets in the United States. CSR literature notes the increased activism of some institutional investors and their calls for increased corporate accountability and transparency.

Multilateral organizations have played an active role in developing standards relating to CSR and in promoting the concept of CSR. The Organization for Economic Cooperation and Development (OECD) first published its guidelines for multinational enterprises in 1976. These guidelines include recommendations by OECD-member governments to multinational enterprises on appropriate business conduct in such areas as business ethics, labor relations, environmental practices, and information disclosure. The OECD revised the guidelines in 2000 to include a call for companies to respect human rights, abolish forced and child labor, and take a more active role in promoting environmental sustainability. The United Nations launched its Global Compact in 1999, and it now consists of 10 principles covering concerns with human rights, labor, environment, and anticorruption.
The World Bank also has a number of program goals related to CSR, including supporting the development of environmental and social practices in individual businesses in emerging markets, working with national governments to help countries better understand and address CSR, and cosponsoring (with the OECD) the Global Corporate Governance Forum, which helps countries improve standards of governance for their corporations.

The Role of Governments in CSR

A 2002 World Bank study identified four major CSR roles for government: endorsing, facilitating, partnering, and mandating. Government endorsement of CSR can take a variety of forms, including direct recognition of businesses with awards. In their facilitating role, governments enable or provide incentives to companies to engage in CSR to obtain social and environmental improvements. In the partnering role, governments work with the private sector and civil society to tackle complex social and environmental problems. In the mandating role, governments require minimum CSR-related actions in laws and regulations.

Some industrialized countries have established programs to foster CSR. For example, in 2001, the European Commission published a green paper to launch debate on how the European Union could promote CSR. Subsequently, the commission held a forum to foster dialogue among the business community, trade unions, civil society organizations, and other stakeholders on CSR. In May 2001, France became the first country to require all publicly listed companies to report on the social and environmental consequences of their activities. In 2000, the United Kingdom appointed a Minister for Corporate Social Responsibility, who maintains a central Web site that highlights government departments with CSR responsibilities.

Although the social and economic priorities vary among developing countries, the high incidence of poverty and weak civil society mean there are often fewer conventional drivers for CSR. Most developing country governments seek foreign investment to help them grow and develop and must attempt to balance development with other social and environmental goals. A 2002 World Bank report notes that developing country governments do not often participate in the development of CSR policies and standards. Another report on public sector support for CSR among global supply chains states that the lack of resources for developing country governments, which do not view export sector workplaces as the highest priority for social and environmental intervention, hinders progress in addressing CSR-related issues in global supply chains.

The effectiveness of government programs supporting CSR in achieving public policy goals has not been established, in part because of the difficulties inherent in such assessments. CSR literature notes that it is difficult to assess the impact of CSR-related partnerships on public policy goals because it is difficult to measure or compare their intangible inputs and outputs. Representatives from the four academic institutions we interviewed agreed that it was difficult to assess the impact of CSR on social goals. Several of these academicians also noted that they had not seen good work measuring the benefit of CSR to society. One noted that CSR is incremental and that it is hard to measure incremental improvements.
While the federal government does not have a formal role in global corporate social responsibility, we identified over 50 programs, policies, and activities at 12 agencies that are related to global CSR using a data collection instrument completed by agency officials. We narrowed down the programs to those that were ongoing in fiscal year 2003 or afterwards, those that may affect U.S. corporations’ CSR efforts overseas, including their supply chains, and those that touch on key components of CSR, such as labor, environment, human rights, community development and corporate governance. As illustrated in the text below, most of these activities can be loosely categorized into the four key roles of governments in global CSR identified by the World Bank: endorsing, facilitating, partnering and mandating. Appendix II catalogs all the programs we identified by agency. There is no comprehensive legislation mandating a federal role in global corporate social responsibility, and few agencies actually define CSR. Many agencies work with the private sector on issues that are generally covered by the concept “corporate social responsibility,” such as labor, environment, human rights and corporate governance, but few agencies define corporate social responsibility or label their activities CSR. Some agencies noted that they use other terms, such as corporate stewardship or corporate citizenship, to refer to similar issues. While there is no law designating a lead agency to coordinate federal government activities related to global corporate social responsibility, United States agencies are currently in the initial stages of creating a Web site to catalogue federal CSR initiatives. This informal interagency initiative, led by staff at the Inter-American Foundation (IAF), initially involved the Department of State, USAID, the Department of Commerce, the Environmental Protection Agency (EPA), and the Overseas Private Investment Corporation (OPIC). The purpose of the initiative is to publicize the U.S. government programs and resources that promote good corporate practices or CSR to businesses and NGOs. The IAF expects to make the Web site publicly available sometime in 2005. Some agencies also reported that while they do not have a formal program focused on global corporate social responsibility, they have a number of initiatives that relate to global CSR. For example, officials at the Department of State, which had the greatest number of initiatives related to global CSR, told us that they house their CSR-related activities in several bureaus linked through informal coordination. Likewise, at the EPA, which also had a large number of related initiatives, an official told us that the agency does not have a specific CSR program, but acknowledged there were many links between EPA programs on the environment and the goals of CSR. Further, EPA recently completed an internal inventory of its voluntary initiatives that partner with corporations to improve coordination and policy consistency. Agency perspectives on global corporate social responsibility vary from active endorsement to reluctance to labeling their programs CSR. For example, several bureaus in the Department of State foster corporate CSR practices as a means to enhance their own efforts aimed at public diplomacy, protecting human rights, and other areas. 
Similarly, the Department of Commerce has officially endorsed corporate social responsibility, stating that American companies must follow the highest standards of conduct anywhere they do business and that American companies contribute to the communities in which they do business. Through good corporate governance and global corporate social responsibility, the Department of Commerce maintains that American companies are helping to spread democratic values and prosperity around the globe, which leads to greater economic freedom, higher standards of living, and greater social and political freedoms. However, other agencies do not want their programs to be labeled CSR because they do not see it as part of their mission or believe they lack authority to engage in CSR activities. For example, while officials from the Office of the U.S. Trade Representative acknowledged that the agency undertakes some activities that might complement CSR, they stated that the agency’s mission is to negotiate trade agreements, not to engage in CSR efforts. Similarly, a senior official at the Department of Labor said that, while the department has many activities that could conceivably be seen as supporting global CSR, the department is not doing them for that reason. He believes the department lacks specific authority to do work on CSR. Some agencies without a formal position on CSR actively take advantage of mutual interests between their missions and company CSR practices to achieve their broader mission goals. For example, USAID and the IAF leverage resources from corporations for development missions, and EPA intends to control pollution through voluntary programs with corporations. Specifically, USAID’s Global Development Alliance aims to achieve the agency’s development goals by leveraging resources from the private sector and other partners. USAID’s alliances address a range of issues, such as encouraging economic growth, developing businesses and workforces, addressing health and environmental problems, and expanding access to education and technology. To illustrate, USAID partnered with one U.S. corporation operating in post-war Angola to build up the country’s business sector and equip Angola’s workforce with necessary business skills. The company and USAID each agreed in 2002 to provide $10 million over 5 years for a series of projects to strengthen small and medium-sized businesses, including helping refugees and former soldiers to return to agriculture, developing an enterprise development bank, and supporting the creation of an agricultural training center. From fiscal years 2002 to 2004, USAID reported funding approximately 290 public-private alliances with over $1.1 billion in federal money and over $3.7 billion in partner contributions. Figure 1 illustrates how federal agency programs sometimes complement company CSR practices. Other agencies, such as OPIC, the Export-Import Bank of the United States (Ex-Im Bank), and the U.S. Securities and Exchange Commission (SEC) engage in activities that are related to CSR, generally in response to statutory or congressional requirements rather than based on a formal agency decision on CSR. Many of the programs we identified started in the last 5 years. For example, the Department of State’s Partnership to Eliminate Sweatshops Program started in 2000 to provide grants to address unacceptable working conditions in manufacturing facilities overseas that produce goods for the U.S. market. 
In fiscal year 2003, the program funded the development of a confidential database of factory monitoring reports that would be accessible by companies seeking compliance information on factories in their supply chains. The effort was in response to U.S. companies that have cited lack of information about factory compliance as an obstacle to improving their own compliance efforts and responsible behavior. Since 2001, several presidential initiatives aimed at foreign assistance have partnered with companies to achieve the initiative goals, which also complement corporate CSR practices. For example, one interagency presidential initiative led by the Department of Commerce, the Digital Freedom Initiative, was announced in 2003 to partner with U.S. businesses to transfer the benefits of information and communication technology to businesses in the developing world. The program has over 90 U.S. corporate and nonprofit organization partners that provide volunteers and other resources to support its activities. As part of the initiative, in Senegal, a U.S. information technology company is developing 12 academies to train Senegalese to install, manage, and maintain modern computer networks. Federal agency activities related to CSR focus on a range of countries and sectors. For example, the International Child Labor Program at the Department of Labor funds projects in Bangladesh, Pakistan, Central America, and West Africa that work with various industry associations to address the use of child labor. The Department of State funds a number of projects in China and other countries in various sectors, including the apparel industry and the extractives sector. Federal programs and activities assist U.S. companies with their philanthropic efforts, as well as with their efforts to be socially responsible in their core business operations, including their supply chains. None of the programs we identified were specifically designed to monitor company CSR activities. Most federal programs, policies, and activities related to CSR have small budgets and staffs. Many programs do not specifically track budget and staffing information for their CSR-related activities. Of the programs reporting budget and staffing information, most are relatively small. The Departments of Commerce and State and EPA, which identified the largest numbers of discrete initiatives related to CSR, reported relatively modest budgets and staffing for their initiatives. In total, only four programs reported budgets at or over $2 million in fiscal year 2003 for CSR-related activities. The two programs that reported the largest annual budgets of around $20 million and $30 million are at the Department of Labor and USAID, respectively. Similarly, many federal CSR efforts are staffed by agency officials with multiple responsibilities, working part time on the effort. Most U.S. government programs, policies, and activities related to global CSR can be loosely categorized into the World Bank’s four public sector roles: endorsing, facilitating, partnering, and mandating. These roles range from the least government involvement—endorsing companies’ voluntary efforts above and beyond compliance with laws and regulations—to the most government involvement through mandating behavior consistent with CSR. Although some federal efforts related to CSR can be classified as serving more than one role, roughly two-thirds of the U.S. 
government programs, policies, and activities that we identified fell in the middle of the spectrum by either facilitating and/or partnering with companies on their voluntary CSR efforts. The remainder fell into either the mandating or endorsing roles or outside the World Bank's roles. Figure 2 illustrates the range of U.S. government activities in the World Bank framework. See appendix II for a complete listing and brief description of the 54 CSR-related programs and activities that we identified at 12 U.S. agencies. The U.S. government has a number of awards programs that endorse CSR by recognizing companies for socially responsible activities. U.S. officials also endorse the concept to audiences through public speeches on an ad hoc basis. Some examples of endorsing include: The Department of State's annual Award for Corporate Excellence, which emphasizes the role U.S. businesses play in advancing good corporate governance, best practices, and democratic values overseas. Since 1999, 12 businesses have received the Award for Corporate Excellence, following nominations submitted by Chiefs of Mission at U.S. Embassies and Consulates abroad. In fiscal year 2004, the Department of State received 50 award nominations from Chiefs of Mission. The EPA's Climate Protection and Stratospheric Ozone Protection Awards, which encourage and recognize outstanding corporate environmental efforts in climate protection. For example, a 2002 corporate recipient of EPA's Climate Protection Award reduced its energy use by over 30 percent internationally and offset all the remaining greenhouse gas emissions both in the United States and overseas. The U.S. government facilitates CSR by providing information, funding, or incentives to companies and other players to engage in CSR-related issues. Some examples include: The Department of Commerce's training on rule of law, human rights, and corporate stewardship for commercial service employees. The training helps these officers provide information on corporate stewardship issues to companies involved in the export promotion process. Additionally, commercial service officers can use this information in their work with overseas chambers of commerce. As of March 2005, 260 commercial service employees had received the training since the program's inception in 2003. The Ex-Im Bank's Environmental Exports Program, which began in 1993. The program enhances the Ex-Im Bank's financing package for environmentally beneficial U.S. goods and services, thereby encouraging foreign buyers to purchase U.S. exports that are beneficial to the environment. Specifically, the program extends loan repayment terms, finances the interest accrued during the disbursement period, and finances local costs to an amount equal to 15 percent of the contract price. Exports eligible for the program include renewable energy projects, water treatment projects, air quality monitoring instruments, equipment for waste collection and cleanup, services for environmental assessments and ecological studies, and other projects that meet specified emission thresholds. During fiscal year 2003, Ex-Im Bank supported over $173 million of environmentally beneficial goods and services, including $13 million in products and technologies related to renewable energy. Several U.S. government programs partner with corporations or convene partnerships with key stakeholders, which can help companies accomplish their CSR initiatives.
In addition to USAID's Global Development Alliance, discussed earlier, representative examples include: EPA's Climate Leaders Program, which partners with companies to achieve EPA's goal of protecting the environment. The Climate Leaders Program is a voluntary government partnership that enlists major U.S. companies to set an aggressive greenhouse gas reduction target. EPA established inventory protocols to assist the companies in tracking their success toward their greenhouse gas target. Partners receive training and technical assistance in completing the greenhouse gas inventories, and EPA works with each partner to develop standard Inventory Management Plans. EPA plans to provide recognition in later years after partners have met or exceeded their targets, which are publicly available on the EPA Web site. The Voluntary Principles on Security and Human Rights, which provide guidance to oil and mining companies on how to ensure respect for human rights in their security procedures. In 1999, together with the government of the United Kingdom, the Department of State convened international NGOs with U.S. and United Kingdom oil and mining companies concerning human rights abuses by hired security forces. A set of voluntary principles was developed through collaboration with the relevant stakeholders. According to a State Department official, nearly every major oil and mining company is now a participant in the Voluntary Principles process. While there is debate over whether complying with laws and regulations constitutes CSR, a number of federal requirements and regulatory mechanisms that mandate corporate behavior on social and environmental issues could fall under the CSR umbrella. Examples of regulations and agencies that require participating companies to comply with CSR-related requirements include: An SEC rule, which provides anyone who owns more than $2,000 in a company's stock for more than 1 year with the opportunity to propose issues for shareholders to vote on. SEC ensures that companies do not exclude shareholder proposals from a vote at annual company meetings unless the proposals meet the legal criteria for exclusion outlined in the rule. According to an investor group that tracks shareholder proposals, out of 1,052 shareholder proposals that were filed at U.S. companies for 2005 meetings, approximately 350 proposals focused on issues related to corporate social responsibility, such as global warming and global labor standards. The Overseas Private Investment Corporation (OPIC), which provides long-term financing and/or political risk insurance to U.S. companies investing in over 150 emerging markets and developing countries, requires that all beneficiary companies comply with certain CSR criteria. These requirements cover issues that include host country development impact, environmental protection, international labor rights, and human rights. The requirements are written into contracts, and OPIC specifies that they must be carried down to the subcontract level. In addition to the four roles discussed above, a number of U.S. programs foster a business environment conducive to CSR by working with other national governments to strengthen compliance and enforcement of social and environmental regulations in countries where U.S. companies operate. These efforts serve to protect U.S. businesses from competing with companies that are not complying with weakly enforced laws and regulations.
Some examples include: The Department of Labor's program on Protecting the Basic Rights of Workers, which works with host country ministries of labor to improve adherence to international core labor standards and acceptable conditions of work in developing countries. In accordance with a congressional appropriation, in fiscal year 2003 the office allocated $20 million for these efforts worldwide, including in a number of countries in Africa, the Americas, and Asia, as well as in Ukraine. However, according to an agency official, the budget decreased significantly in subsequent years to $2.5 million in fiscal year 2004 and no funding in fiscal year 2005. EPA's International Compliance Assurance Division, which works with governments to ensure that companies comply with environmental standards. Since 2001, approximately 20 trainings have been held for officials from a wide range of countries, including South Africa, Nigeria, Indonesia, Vietnam, Brazil, Guatemala, and Egypt. Our review of CSR literature revealed that support for government involvement in CSR varied with views of CSR's connection to business profit. Opinions of those we interviewed on the impact of existing federal agency efforts and the appropriate government role related to CSR generally revealed a desire for government involvement and the widest support for federal agency activities that assist businesses in their voluntary efforts. Based on our review of CSR literature, perspectives on the appropriate role of government in CSR vary, but generally correlate with three major perspectives on the connection of CSR to business profits: (1) free-market economic, (2) "business case," and (3) social issues. Those with a free-market economic perspective generally view businesses engaging in CSR as a potential taking of profits from the business owners that will ultimately diminish the effectiveness of the business and a free-market economy. The well-known economist Milton Friedman refers to the doctrine of "social responsibility" as fundamentally subversive in a free society, stating, "there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud." According to this free-market economic perspective, business managers have a primary duty to maximize value for shareholders and, in doing this, businesses serve the general welfare by directing resources to produce goods and services society wants. In this view, engaging in CSR actions that are not based on profitability can affect not only business performance but also potentially reduce the general welfare of society. David Henderson, an economist who has written extensively questioning the value of CSR, recently wrote, "The general adoption of CSR, in response to social pressures, would undermine the market economy and make businesses less effective in performance of their primary role." While this free-market economic perspective recognizes that government has a role in structuring the legal framework of a market economy, those with this view do not support government involvement in the general adoption of the concept of CSR. Many CSR proponents cite a "business-case" perspective, in which business CSR efforts are supported based on their contribution to business profit and value.
Those with the business-case perspective reason that businesses can undertake CSR actions that will increase their value or return on investment in terms of increased revenue, increased asset value, or reduced cost. Business leaders often indicate that their CSR practices help their bottom line. Supporters of the business-case perspective assert that addressing important social issues in the business environment can contribute to the long-term value of the firm. Supporters of this perspective have developed many different lists of potential benefits to a business in adopting CSR. For example, one discussion of the business case identified the following six potential business benefits: Operational cost savings—Investment in environmental efficiency measures such as waste reduction and energy efficiency can save money as well. Enhanced reputation—Good company performance in relation to sustainability issues can build reputation, while poor performance, when exposed, can damage brand value. Increased ability to recruit, develop, and retain staff—These can be direct results of introducing 'family-friendly' policies. Also, volunteering programs may improve employee morale and loyalty to the company. Better relations with government—More favorable government relations and regulatory rulings are key for many companies looking to extend their business in politically unstable conditions. Anticipation and management of risk—Managing risk is increasingly complex in a global market environment. Greater oversight and stakeholder scrutiny of corporate activities make managing risk key to company success. Learning and innovation—The interaction required with a wide range of individuals and organizations outside the traditional business relationships can encourage creativity, which can lead to increases in profitability. The benefits of CSR can also be viewed in a global context, with the interaction between multinational businesses and foreign host-country governments concerning issues of foreign direct investment and business operations in host countries generally. Engaging in CSR practices may help multinational businesses manage certain political and reputation risks in their operations, particularly with regard to host countries in the developing world. Negative publicity can seriously undermine the reputation of a multinational business internationally, and it can create a political climate that may lead a host government to take actions, such as regulation or other restrictions, that can undermine the firm's efficiency and profitability. In addition, some developing countries may not have adequate laws to address concerns about workers' rights or the local environment, and even where they do, these countries may not have the resources, technical expertise, or the willingness to adequately enforce their laws and regulations. By demonstrating a commitment to good business practices, such as through CSR, multinational businesses may send a signal that they are committed to helping mitigate problems or issues that may arise regarding their operations, thus creating a more positive climate in which to pursue business opportunities. Those with a "business-case" perspective view a major role of government as supporting businesses' voluntary CSR-related efforts. Surveys of business leaders indicate that they believe that CSR should be completely voluntary.
This perspective stresses business involvement in the development of CSR efforts because the business knows its resources and constraints and can best identify potential benefit to the business. Supporters of this perspective look for business to work with civil society and government to develop CSR approaches that address relevant social issues. Subscribers to this view see advantages of government working with business. For example, in a recent book, Walking the Talk: The Business Case for Sustainable Development, the authors state, "Governments too, have a vested interest in collaborating with companies. Governments are spending less time on command-and-control regulations and more on forms of cooperation with industry to produce workable, incentive based solutions. They are finding that historically intractable social and environmental problems, such as poverty, disease, and threats to biodiversity, can only be solved through partnership." Those with a social issues perspective focus on the extent to which business addresses social issues, but opinions within this group are mixed on whether to rely on voluntary or mandatory CSR approaches. A 1999 survey of 25,000 consumers worldwide found that two-thirds of the population in countries surveyed indicated that "they want companies to go beyond their historical role of making a profit, paying taxes, employing people and obeying all laws; they want companies to contribute to broader societal goals as well." Some supporters of the social issues perspective cite successes of some voluntary business CSR efforts in contributing to social issues. Some also call on business to voluntarily adopt CSR practices to address social issues beyond what might be justified by business profit. Such organizations see a role for government in fostering voluntary corporate CSR actions. Others with a social issues perspective take a very different view. They believe that business is primarily concerned with profit and thus should not be trusted to develop solutions for important social issues on its own. According to those with this view, business involvement in CSR efforts can become merely a branch of public relations instead of effectively addressing social problems. As a result, they feel that governments should move to mandate CSR. Several groups have argued for increased government engagement in CSR initiatives aimed at ensuring that businesses adhere to international norms. For example, one consumer group's position paper on CSR calls on governments and international agencies to introduce legislation to set standards that transnational corporations must observe and also to establish a framework for monitoring corporate behavior. Similarly, another group noted that there is a need for increased government engagement in CSR initiatives aimed at ensuring that businesses adhere to international norms because governments are the only actors with jurisdiction over the private sector. Another human rights NGO states that voluntary initiatives will often be ineffective and insufficient. This organization further states that more attention should be given to the role international law can play in anchoring these responsibilities in a legal framework that crosses national boundaries. In addition to reviewing the available literature, we also interviewed 32 individuals representing groups actively engaged in CSR to obtain their views on the appropriate role for the federal government and the impact of current federal activities on their CSR efforts.
Specifically, we interviewed 14 companies, 4 business groups, 6 NGOs focused on environmental, human rights and labor issues, 4 investor groups, and 4 academic institutions (See app. I for a complete list of the respondents). A majority of respondents supported a government role in global CSR, yet views varied regarding the appropriate federal role and the impact of current activities. Most respondents generally supported government assistance with voluntary CSR efforts such as endorsing, facilitating, and partnering, while some also expressed an interest in government-mandated CSR, especially to increase disclosure of CSR-related information. Most respondents saw a need for the U.S. government to encourage foreign governments to enforce CSR standards to help level the playing field for U.S. companies adhering to high CSR standards. Some respondents based their discussion of the government role on their knowledge of current U.S. government activities related to global CSR, yet we found that several were unaware of these efforts. Also, some said they were aware of U.S. government efforts, but primarily cited domestic CSR efforts or initiatives that are not led by the U.S. government. Several respondents called for a greater U.S. government role in CSR, as in some other countries, and greater coordination of existing U.S. efforts. A number of respondents were aware of U.S. government award programs that endorse CSR, but had mixed reactions regarding their effectiveness. Whereas a majority of companies we interviewed who commented on awards said they have a positive impact, for example, by motivating employees and validating the company’s efforts, some were not motivated by awards. One company in favor of government endorsing CSR through awards said that, although there are a lot of awards given to companies for corporate social responsibility, an award from the U.S. government or another government is credible and valuable. However, another company said it receives so many awards that receiving one more is not very useful, unless it is accompanied by significant media attention. Most of the business groups reacted positively to federal government awards, stating that awards call attention to success stories and provide a signal of the type of behavior the government likes, help to motivate companies, and provide a positive counterbalance to regulations and compliance by rewarding voluntary efforts. Most of the NGOs that were aware of federal government awards for global CSR activities were skeptical of the impact of the awards, questioning the nominations and selection processes and whether the awards are a good indicator for companies’ CSR performance. The two investor groups that were aware of federal government awards programs thought they were a positive influence. In addition to awards, a few respondents also suggested the government should more actively endorse CSR in its own procurement processes and in government pension investments. Many respondents from the various groups expressed support for federal government efforts to facilitate CSR, especially through providing information. Representatives from companies and other groups suggested that the government could play a more active role in providing information on setting benchmarks in areas such as the environment and human rights, providing information on best practices and how to start CSR activities in other countries, or establishing a clearinghouse with CSR-related information. 
A few respondents suggested that providing information or assistance would be particularly helpful for small and medium-sized companies and companies just getting started with CSR. Many respondents viewed government partnerships with companies and efforts to convene stakeholders to accomplish CSR goals favorably and thought this was an appropriate role for the U.S. government. One company that has worked with USAID and the Centers for Disease Control on a health-related issue in Haiti said that the government is well placed to help companies focus on the needs of those living in poverty and that companies have a lot to contribute by helping to provide safe drinking water, fight HIV/AIDS, and improve education and economic welfare. Two NGOs that were aware of partnership programs had mixed reactions. For example, while one NGO said partnerships are helpful in bringing parties together and leveraging private sector resources, another NGO was concerned about potential conflicts of interest. Respondents from business groups, investor groups, and academic institutions who commented on federal efforts to partner with companies on CSR issues were generally positive about these partnerships. Many organizations supported a federal role in partnering by convening stakeholders to address specific CSR issues or to share information. For example, the Department of State's involvement in developing the Voluntary Principles on Security and Human Rights was cited as an example of a positive effort by the U.S. government to convene stakeholders to address a CSR-related issue. Companies and business groups generally held mixed views regarding the impact of laws and regulations on company global CSR efforts, whereas NGOs and investor groups largely believed that laws have a positive impact on CSR. In general, these latter groups desired a government role in mandating CSR, especially to increase disclosure and transparency of company CSR activities. A few respondents cited the lack of U.S. legislation or involvement in CSR as an impediment to companies' CSR efforts. While some companies were concerned about burdensome mandates, several said that certain existing regulations and government efforts create minimum standards and level the playing field internationally, which is helpful to companies with active CSR programs. According to one director of CSR, the company's initial reaction to CSR requirements, such as import controls, is negative because they are costly and burdensome. However, the company recognizes that new rules can help level the playing field, as not all companies voluntarily adopt high standards. Another company said the Foreign Corrupt Practices Act has had a positive impact on the company's CSR activities by enhancing the visibility of CSR and helping to raise standards of transparency and governance. Similarly, customs legislation that sets minimum criteria allows the company to discuss CSR standards with its suppliers and ensures that it is not the only company focusing on these issues, which could create a competitive disadvantage. A business group expressed concern that codes can also lead to conflicts between moral principles; for example, policies to prevent harm to animals or the environment may inhibit companies' ability to discover life-saving treatments or technologies. One multinational company said it upholds homogeneous standards globally, so in that sense, U.S.
programs could affect its global standards in reporting, building design standards, and worker health and safety. However, the company also noted that its own standards often exceed legal standards. Many respondents agreed that government should play a role in promoting transparency and disclosure of companies' CSR efforts. Some companies strongly supported a federal role in promoting transparency, yet others warned against regulation and adverse consequences, for example, if U.S. companies face regulatory burdens and are forced to disclose more than their foreign competitors. For example, the Sarbanes-Oxley Act of 2002 was cited by companies as a costly and burdensome mandate. However, some NGOs and investor groups supported government mandates requiring companies to disclose information on CSR-related issues. Three academic institutions cited recent European regulations on disclosure of CSR issues as a model for the U.S. government. "The single most useful activity of the U.S. government to promote corporate responsibility would be to promote the implementation and enforcement of existing national laws in other countries and to assist national governments in this regard. The majority of countries around the world have adequate laws, but such laws are not implemented or enforced. Commercial activity and private enterprise depend on national governments to set a level playing field so that competitive markets can flourish for the benefit of consumer and society. This requires . . . appropriate legal frameworks in areas such as corporate governance, financial disclosure, bribery and corruption, environmental protection and labor rights." Some respondents expressed a desire for more coordination among U.S. activities related to global CSR and pointed out that other countries are more involved in CSR than the U.S. government. Some noted that federal efforts are not well coordinated, which can make it difficult for companies to participate in U.S. government activities, and called for increased coordination among U.S. government agencies for CSR activities. Several respondents also expressed a desire for a greater U.S. government role in CSR, stating that the United States, unlike other governments, especially the European Union, is absent from world leadership on this issue. According to one company, many European countries are involved in CSR activities, and if the U.S. government does not play a role regarding U.S. companies' international CSR activities, leadership will go elsewhere. Similarly, another company wanted the U.S. government to participate in the global debate on CSR and to continue its efforts to represent U.S. interests in the face of the European Union's more regulatory approach to CSR. The globalization of recent decades has increased the breadth and extent of U.S. corporations' operations in foreign markets, through both increased investment and trade. These globalization trends have led to increased pressure on U.S. multinational corporations to adopt more CSR-related activities in their global operations, particularly in developing countries. Nevertheless, the extent to which U.S. multinationals adopt CSR practices continues to vary by industry, location, and individual firm priorities. At the same time, if the recent CAFTA debate in Congress is any guide, the U.S. government also faces calls to strengthen labor, environmental, and social conditions abroad.
Thus, the debate over the right balance between private sector and government roles in achieving these CSR-related goals will likely continue. Important public policy questions have been raised by the trends in globalization and global corporate social responsibility such as whether the U.S. government should adopt an official position regarding global CSR. However, the dichotomy of views regarding the benefits of CSR to business and society complicates any consensus on the appropriate government role. Our research shows that U.S. federal agencies already conduct a number of programs and activities that overlap and/or interact with corporate global CSR efforts. In addition, our interviews with agency officials indicate many view CSR as a useful complementary tool for attaining their broader policy missions. Key private sector players in CSR, meanwhile, indicate that they generally found current U.S. government activities helpful in their voluntary CSR efforts. More generally, it appears that CSR—even if not a substitute for regulation—has resulted in the commitment of U.S. multinational resources, and focus on issues of importance to the U.S. and to host countries. The challenge for the U.S. government is to determine how global CSR fits within the broader range of policy tools directed at achieving sustainable improvements in the quality of life for both U.S. and foreign citizens. We provided a draft of this report to the Administrator, Agency for International Development; the Administrator, Environmental Protection Agency; the President, Export-Import Bank; the President, Inter-American Foundation; the President, Overseas Private Investment Corporation; the Executive Director, Securities and Exchange Commission; the U.S. Trade Representative; and the Secretaries of the Departments of Commerce, Energy, Labor, State, and the Treasury. We received technical comments from the Agency for International Development; the Environmental Protection Agency; the Export-Import Bank; the Inter-American Foundation; the U.S. Trade Representative; and the Departments of Commerce, Labor, and State. We revised the text based on these comments, where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to interested Congressional Committees and to the Agency for International Development; Environmental Protection Agency; the Export-Import Bank; the Inter-American Foundation; the Overseas Private Investment Corporation; the Securities and Exchange Commission; the U.S. Trade Representative; and the Departments of Commerce, Energy, Labor, State, and Treasury. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4347 or at yagerl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Members of the House of Representatives asked us to provide information on the federal involvement in global corporate social responsibility. 
This report describes (1) global corporate social responsibility (CSR), (2) federal agency policies and programs relating to global CSR, and (3) different perspectives regarding the appropriate U.S. government role and views on the impact of current federal activities on corporate global CSR efforts. To describe global corporate social responsibility, we reviewed business and ethics literature and interviewed corporations and other groups interested in CSR. Specifically, we reviewed documentation from academic institutions, business associations, and multilateral organizations, including the European Commission and the World Bank CSR Practice. However, the information on foreign law in this report does not reflect our independent legal analysis, but is based on interviews and secondary sources. We collected major definitions and descriptions of CSR and global CSR and related terms and obtained information on different perspectives that have led to different definitions for CSR and CSR-related terms. To determine what policies and programs U.S. federal agencies have adopted that relate to global CSR, we surveyed federal legislation, reviewed literature, and spoke with agency officials and experts in CSR. To select the federal agencies to involve in our review, we first considered which agencies' missions suggested possible involvement with promoting, facilitating, or monitoring global corporate social responsibility efforts, which yielded seven agencies. We then added two additional agencies to include all of the agencies that participate in the interagency working group developing a Web portal to publicize the U.S. government programs and resources that promote good corporate practices or CSR. We added the remaining three agencies, following referrals by agency officials or CSR experts, and had discussions with some agency officials to determine if their agencies had relevant programs for this review. The 12 agencies we identified with CSR-related programs were: Department of Commerce, Department of Energy (DOE), U.S. Environmental Protection Agency (EPA), Export-Import Bank of the U.S. (Ex-Im Bank), Inter-American Foundation (IAF), Department of Labor, Overseas Private Investment Corporation (OPIC), U.S. Securities and Exchange Commission (SEC), Department of State, Department of the Treasury, U.S. Agency for International Development (USAID), and Office of the U.S. Trade Representative (USTR). We identified specific agency programs and policies related to CSR using a two-step process. First, we provided a standard Data Collection Instrument (DCI) with a general description of global CSR to agency officials and asked them to identify current programs, policies, and efforts within their agencies that directly or indirectly promote, facilitate, or monitor global CSR efforts. The description discussed the general elements that a global CSR program can involve, including labor, human rights, environmental, and corporate governance efforts. In addition, we also asked agencies about programs that we identified through interviews or literature review. We then sent a more detailed DCI to officials responsible for each identified program to obtain further information, such as the program's objective, start year, legal basis, targeted groups, and activities. Most of the programs have other goals and objectives, and some only relate to CSR in particular aspects of their activities. We collected budget information and staffing levels, where available, to estimate the level of effort dedicated to the CSR activities by the agency.
After we received the responses from the agencies, we followed up with many of the identified federal programs to obtain additional information, which helped us determine whether we should include the program in our review. We also obtained additional documentation from a subset of the programs to verify the information and conducted a thorough review of all of the responses identifying the legal basis for the program/activity. We narrowed down the programs to those that met the following criteria: (1) were ongoing in fiscal year 2003 or afterwards; (2) may affect U.S. corporations' CSR efforts overseas, including their supply chains (e.g., government-to-government efforts); and (3) touch on key components of CSR, such as labor, environment, human rights, community development and corporate governance. We also obtained agency concurrence that the program is related to CSR. We excluded programs or activities that are primarily aimed at U.S. corporations' CSR efforts within the United States, although they may influence a company's CSR efforts overseas, and efforts that are primarily targeted at the federal government, such as government procurement policies. Due to the lack of federal legislation on, and a generally accepted definition of, corporate social responsibility, there are likely additional programs, policies, and efforts related to global CSR within the federal government that we did not identify. To obtain different perspectives regarding the role of the U.S. government in corporate global CSR efforts, we reviewed CSR literature. In addition, we conducted structured interviews with 32 individuals representing a diverse variety of groups actively engaged in CSR and synthesized the information we obtained. We initially identified 25 U.S. companies that (1) were leaders in CSR, based on their appearance on the Business Ethics Magazine's Top 100 Corporate Citizens list each year from 1999 to 2004, and (2) had international operations. Fourteen of these companies agreed to participate in interviews with us. However, their views may not represent those of all 25 leaders we identified, or those of all U.S. companies. We identified representatives from other groups actively engaged in CSR through a review of CSR literature and referrals from experts and agency officials. We selected these groups and organizations to help us obtain a broad range of knowledgeable and informed views on global CSR and the federal government's role in global CSR; our selection was not intended to be representative in any statistical sense. Groups that are not active in global CSR may have different views and opinions, especially in terms of the federal government's role. Specifically, we interviewed: Fourteen U.S. multinational corporations that appeared on the Business Ethics Magazine's Top 100 Corporate Citizens list for each year from 1999 to 2004—Brady Corporation; Coors Brewing Company; Cummins, Inc.; Deere & Company; Herman Miller, Inc.; Hewlett-Packard Development Company, L.P.; International Business Machines Corporation; Intel Corporation; Merck & Co., Inc.; Modine Manufacturing Company; Motorola, Inc.; Procter & Gamble; The Timberland Company; and Whirlpool Corporation; Four business interest groups that have been active in CSR—Business for Social Responsibility; the Conference Board; the U.S. Chamber of Commerce Center for Corporate Citizenship; and the U.S.
Council for International Business; Four investor groups—Calvert Group, Ltd.; Domini Social Investments, LLC; Dow Jones Sustainability Index; and the Interfaith Center on Corporate Responsibility; Six nongovernmental organizations—Coalition for Environmentally Responsible Economies; Fair Labor Association; Human Rights Watch; Social Accountability International; World Resources Institute; and Worldwide Responsible Apparel Production; and Four academic institutions—Center for Corporate Citizenship, Boston College; Center for Responsible Business, the Haas School of Business, University of California at Berkeley; the Corporate Social Responsibility Initiative, John F. Kennedy School of Government, Harvard University; and the Frank Hawkins Kenan Institute of Private Enterprise, University of North Carolina's Kenan-Flagler Business School. The structured interview instrument included questions designed to obtain information from these organizations on their definition of CSR and similar terms; efforts related to evaluating the effectiveness of CSR activities; the impact of current U.S. government programs, policies, and practices; and opinions regarding the appropriate U.S. government actions or role regarding U.S. companies' global CSR activities. However, in this report, we do not evaluate the concept of CSR or the justification or efficacy of any government role with regard to CSR activities. We conducted our work from May 2004 through May 2005 in accordance with generally accepted government auditing standards. This appendix provides a listing and brief description of the 54 programs and activities we identified at 12 U.S. agencies that relate to global CSR. Currently, an inventory of U.S. government efforts related to global corporate social responsibility is unavailable. To develop this list, we provided a standard DCI to 12 agencies with a general description of global CSR to obtain information on current programs, policies, and efforts within their agency that directly or indirectly promote, facilitate, or monitor global CSR efforts. For programs or activities that are interagency in nature, we list the program or activity with the lead agency and indicate other agencies involved with a footnote. Due to the lack of federal legislation on, and a generally accepted definition of, corporate social responsibility, we do not consider this list exhaustive. See appendix I for a more detailed description of our data-collection process. Export-Import Bank Act of 1945, as amended, codified at 12 U.S.C. 635. Foreign buyers and U.S. exporters participating in foreign projects. During FY 2003, Ex-Im Bank screened approximately 70 applications for their potential environmental effects. The Bank's Engineering and Environment Division undertook formal environmental evaluations of the projects related to 21 separate applications for financing. $531,000. Three FTEs. Export-Import Bank Act of 1945, as amended, codified at 12 U.S.C. 635. U.S. suppliers of environmentally beneficial products, and participants undertaking projects that are beneficial to the environment. The Environmental Exports Program was instrumental in enabling Ex-Im Bank to support over $173 million of environmentally beneficial goods and services in FY 2003, including $13 million in products and technologies related to renewable energy. $148,000. 0.80 of an FTE. Authorizing legislation, See 22 U.S.C. 290f. U.S. companies and other organizations operating in the U.S. and abroad.
A Web portal will be launched in 2005 housing each agency's activities related to CSR or corporate stewardship. $37,000 was obligated in FY 2003, but activities were carried out in FY 2004. 20-25 percent of one staff person's time. Authorizing legislation, See 22 U.S.C. 290f. U.S., Latin American and Caribbean corporations and business associations, local governments and NGOs. Supports innovative projects in Latin America and the Caribbean in partnership with companies that want to invest in grassroots development; Facilitates tax-deductible contributions by U.S. corporations to support grassroots development programs in Latin America and the Caribbean; Provides technical assistance to corporate partners to create more sustainable, participatory CSR programs. $1,039,500. 0.75 of an FTE divided among several staff persons. Two full-time staff starting in FY 2004. Authorizing legislation, See 22 U.S.C. 290f. U.S. and foreign companies and corporate foundations. Learning exchanges among members, strategy formulation, development of trainings in all countries, mobilizing corporate and other resources; At the end of FY 2004, 52 companies were in the network, several of which represented multiple companies. Authorizing legislation, See 22 U.S.C. 290f. Private, public, and nongovernmental sectors. Provides funding, participated on steering and operating committee. $35,800 for FY 2004. 10-15 percent of one staff person's time. The Inter-American Foundation is leading this interagency effort. Additional participating agencies include the Departments of Commerce and State, USAID, and EPA. The full name of the RedEAmérica Initiative is the Inter-American Network of Corporate Foundations and Companies for Grassroots Development. The Inter-American Development Bank is the lead organizer for the conference. The U.S. Department of State has also played a role coordinating U.S. government involvement in the conference. Annual appropriations legislation. Current authority is P. L. 108-447, Div. F, Title 1 (Department of Labor Appropriations Act, 2005). Foreign governments, workers, and employers. Training, equipment provision, drafting of training materials, and promotional activities. The program works in a range of sectors and countries in Africa, the Americas, Asia, and in Ukraine. One project in Cambodia is establishing an independent monitoring system to generate reliable information on the implementation of core labor standards in the garment sector. Nine staff work part time on this program. The program has funded several projects for various lengths of time in Bangladesh, Pakistan, Central America, and West Africa that involve industry associations to combat child labor. For example, the program provided a $6 million grant to the International Labor Organization to prevent child labor in the coffee industry in Central America and the Dominican Republic, which included the creation of a child labor monitoring system, among other activities. About $35 million between fiscal years 1999-2004 for all projects working with industry associations. Not available. Annual appropriations legislation. Current authority is P. L. 108-447, Div. F, Title 1 (Department of Labor Appropriations Act, 2005). Children, parents, community leaders, government officials, and industry associations.
The International Child Labor Program generally provides technical assistance and funds international projects designed to eliminate the most hazardous and exploitive forms of child labor; researches and reports information to inform U.S. foreign policy, trade policy, and development projects; and raises awareness of the U.S. public to increase its understanding of the issues relating to international child labor and recent efforts to combat the problem. For example, the program works with foreign governments to improve their capacity to handle the issue of child labor and has provided funds to the International Labor Organization to address trafficking of children for labor exploitation. However, the program informed us that it considers its work with industry associations to be most relevant to global corporate social responsibility. Not available. Not available. Companies receiving OPIC support in the form of direct loans, loan guaranties, political risk insurance and "subprojects" obtaining funds from OPIC-supported financial intermediaries. Evaluates each project's expected impact on development and the environment, and requires projects to meet all applicable host country labor laws or international conventions on labor rights. Not available. Not available. Overseas Private Investment Corporation Amendments Act of 1977, See P. L. 95-268, Sec. 237(1). U.S. investors that receive OPIC support and the companies in which they invest. All major sponsors of an OPIC-financed project must answer questions relating to the Foreign Corrupt Practices Act, and OPIC ensures that support does not go to persons and practices restricted by Treasury's Office of Foreign Assets Control. OPIC monitors loan projects on an ongoing basis. Rule 14a-8 of the Securities Exchange Act of 1934, See 17 C.F.R. 240.14a-8. SEC reporting companies. The Shareholder Proposal Taskforce corresponds with companies regarding requests to exclude shareholder proposals that do not meet the criteria according to Rule 14a-8. Not available. Not available. Three full-time staff in FY 2004. Foreign companies and U.S. companies with foreign subsidiaries that have contacts with countries of concern. Review company documents to ensure that companies are aware of the disclosure standard applicable to their operations or contacts. Not available. SEC reporting companies. Securities Act of 1933 and the Securities Exchange Act of 1934. Review company disclosure and provide comments to companies. Not available. Not available. Rule 14a-8 provides shareholders owning more than $2,000 of company stock for more than 1 year with the opportunity to place a proposal in the company's proxy materials for presentation to a vote at an annual or special meeting of shareholders. The rule generally requires the company to include the proposal unless the shareholder has not complied with the rule's procedural requirements or the proposal falls within 1 of the 13 substantive bases for exclusion contained in the rule. For some or most of the proposals, the company accepts the proposal or negotiates with the shareholder and the issue never reaches the SEC. However, if a company intends to exclude a proposal from its proxy materials, the company must submit its basis for excluding the proposal to the SEC. The Shareholder Proposal Taskforce reviews these requests for exclusion. Accordingly, the task force considers proposals that address a range of issues, including global CSR issues. State Department Basic Authorities Act of 1956, as amended, See 22 U.S.C.
2651a(c)(2) and 22 U.S.C. 2151n(d)(3). NGOs, governments, U.S. and foreign companies. The Partnership has provided several million dollars to support public and private sector initiatives to establish codes of conduct, encourage effective workplace monitoring and auditing systems, and conduct research, training and education initiatives. The program has funded projects in a number of countries, including China and other Asian countries, Central America, the Middle East and Africa. One staff person works 60 percent of his time and one staff person works 25 percent of her time on this effort. State Department Basic Authorities Act of 1956, as amended, See 22 U.S.C. 2651a(c)(2). NGOs, U.S. and U.K. oil and mining companies, and corporate responsibility organizations. The bureau convenes companies, NGOs, and local governments to implement the principles, and is working to include additional governments. Nearly every major oil and mining company is a participant in the Voluntary Principles process. One staff person works 15-20 percent of his time on this effort. State Department Basic Authorities Act of 1956, as amended, See 22 U.S.C. 2651a(c)(2). U.S. companies, NGOs, foreign governments. Ongoing or planned projects in Equatorial Guinea, Oman and China. Equatorial Guinea - $225,000; Oman - N/A; China - $400,000 in FY 2004. One staff person works 20 percent of his time on this effort. Not available. State Department Basic Authorities Act of 1956, as amended, See 22 U.S.C. 2651a(c)(2). Multilateral organizations and their member states. Prepares guidance for U.S. delegations and responds to requests for information from United Nations organizations when issues arise related to corporate responsibility. DRL also represents the State Department and the U.S. Government at a variety of conferences and meetings related to corporate responsibility, where human rights issues are directly relevant. No specific budget. One staff person works 5 percent of his time on this effort. Mission of the Bureau of Economic and Business Affairs. U.S. small and medium-sized companies and multinational corporations. Award nominations, public ceremony. In FY 2004, the Department received a record number of 50 nominations from U.S. Chiefs of Mission worldwide. One staff person works 30-40 percent of her time on this effort [estimate]. Presidential initiative. U.S. and Mexican businesses, associations and academic institutions. Award nominations, public ceremony. Seventy nominations were received in FY 2003. More than 900 people attended the Award Ceremony and Gala, which received extensive media coverage, especially in Mexico. Not available. One staff person works 100 percent on the award program from April through June and 30 percent for the remainder of the year. Companies, labor unions and NGOs. No discrete budget. Requirement as signatory to the OECD Declaration and Decisions on International Investment and Multinational Enterprises. Promotes understanding of the OECD Guidelines and helps companies, labor unions and NGOs in their efforts to resolve issues that may arise with respect to the Guidelines; From 2000-2004, 16 specific instances were brought to the attention of the National Contact Point. One staff person spends 33 percent of his time on this effort and one office director spends 10-15 percent of his time on this effort. Foreign governments, companies. No separate funding. See Pub. L.
100-318, The Omnibus Trade and Competitiveness Act of 1988, asking the Executive Branch, led by State, to negotiate a convention on bribery at the OECD. See also Senate Resolution of Advice and Consent to the OECD Antibribery Convention, of July 31, 1998. Leads U.S. delegation to the OECD Working Group on Bribery to monitor implementation and enforcement of the OECD Antibribery Convention, and to assess areas where the Convention could be amended to decrease bribery and other corrupt activity. Meets with the private sector and civil society groups regarding implementation of the OECD Antibribery Convention. One deputy office director spends 50 percent of his time on this effort. 22 U.S.C. 2656. Georgia, $150,000 (cost of contractor who serves as project coordinator). 1.25 FTEs. Nicaragua, Nigeria, Peru, other G-8 governments. Provides assistance through the Bureau of International Narcotics and Law Enforcement Affairs to develop a series of projects with four countries that signed compacts with G-8 countries in 2004 committing to reduce corruption and enhance transparency in their budgets, government procurements, and concession-letting procedures. Presidential initiative. Liaises with USAID, Treasury, the private sector, and civil society, and coordinates U.S. policy toward the EITI, a United Kingdom-led initiative. No separate funding. 0.25 of an FTE. Governments, companies, industry associations, international organizations, civil society, investors. Foreign Assistance Act of 1961, as amended. U.S. companies and business groups, civil society, foreign governments, and international organizations. Meets with representatives from the private sector and civil society on a regular but ad hoc basis, includes representatives from the private sector and civil society on official U.S. delegations relating to sustainable development, and provides information about and encourages sustainable development partnership efforts. In addition, OES leads negotiations for environmental side agreements to trade agreements. No specific budget. Not available. Coordinating function operating through the bureau. U.S. commercial, academic or cultural contacts and civic groups. Manages a Web site for donations, coordinates referrals from Congress, and, with other agencies, presents to gatherings of individuals or organizations with an interest in supporting reconstruction and humanitarian needs in Iraq. No specific budget. One full-time person. Emergency Wartime Supplemental Appropriations Act, 2003 (See P. L. 108-11) and chapter 4 of Part II of the Foreign Assistance Act of 1961, as amended. U.S. companies. U.S. companies host interns for three months at their own expense. In fiscal year 2003, more than 35 companies hosted interns. $2,000,000. One part time plus contractor support. U.S. and foreign companies. Emergency Wartime Supplemental Appropriations Act, 2003 (See Pub. L. 108-11) and chapter 4 of Part II of the Foreign Assistance Act of 1961, as amended. Through a cooperative agreement to the Junior Achievement program, MEPI is setting up chapters throughout the region to promote entrepreneurship, such as job training, among high school-aged youth. U.S. and foreign companies serve as long-term sponsors and mentors for this program. $2,400,000. One part time plus contractor support. United Nations Participation Act of 1945. Not available. United Nations programs, funds, agencies and other organizations.
Negotiations over resolutions, work programs and budgets in United Nations organizations, and reviews of programs and activities. No specific budget. Several staff in this bureau devote time to CSR on an ad hoc basis.

The U.S. Leadership Against HIV/AIDS, Tuberculosis, and Malaria Act of 2003, see Section 101 of P.L. 108-25. Private sector. Not applicable. The private sector is a critical partner at the country level. These partnerships facilitate company workplace programs to create awareness about the spread of HIV/AIDS, decrease stigma among those who know their HIV status, and provide antiretroviral therapy to employees and their families. The office began keeping track of the number of partnerships in FY 2005. No specific budget.

The Partnership for Prosperity is a bilateral initiative between Mexico and the United States designed to leverage private sector resources and expertise to boost the social and economic well-being of Mexican citizens, particularly in regions where economic growth has lagged. In addition to the Department of State, USTR, EPA and the Departments of Treasury, Commerce, and Labor help resolve complaints against companies. The guidelines are a set of nonbinding recommendations that have been agreed upon by OECD member countries. Their aim is to provide guidance for companies on a range of business activities, including industrial relations, human rights, environment, information disclosure, competition, taxation, and science and technology. The Department of State also coordinates with the Departments of Commerce and Justice to address, as appropriate, alleged incidents of bribery of foreign public officials (by foreign-based corporations) that adversely affect the opportunity for U.S. companies to compete on a transparent and level playing field for international tenders and contracts. OGAC is not an implementing office; actual implementation of partnerships with the private sector and other workplace activities is carried out by its implementing agency partners, primarily USAID and the Department of Health and Human Services (HHS).

Clean Diamond Trade Act (P.L. 108-19), Executive Order 13312, and Rough Diamonds Control Regulations, 31 C.F.R. part 592 (Regulations). Companies or individuals involved in the export from and/or import into the United States of rough diamonds. The Regulations provide that trade in rough diamonds is prohibited unless the rough diamond is controlled through the Kimberley Process Certification Scheme as set forth in the Regulations. The U.S. also participates in Kimberley Process multilateral working groups on Monitoring and Statistics. No specific budget. Three FTEs.

To encourage public-private partnerships for development projects. Foreign Assistance Act of 1961 (P.L. 87-195), as amended. Foundations, for-profit firms, civil society organizations, foreign governments. Trains USAID staff on public-private alliances and conducts outreach to private sector and civil society partners. For fiscal years 2002-2004, USAID leveraged over $3.7 billion in partner assets through $1.1 billion in agency funding. Staff plus contractors and field support.

Executive Order 13317. To deploy skilled volunteers in U.S. foreign assistance programs. U.S.-based organizations, including corporations. By the end of FY 2004, Volunteers for Prosperity recruited nearly 200 for-profit and nonprofit organizations, representing a pool of at least 34,000 skilled American professionals available to serve as volunteers.
Participating organizations reported having deployed nearly 7,000 volunteers. No specific budget. Three full-time staff. USAID serves as the interagency coordinator for this initiative. Per the executive order, USAID and the Departments of State, Commerce, and Health and Human Services were required to set up Volunteers for Prosperity offices or operating units.

2002. Trade Act of 2002. Foreign governments. No discrete budget. Negotiating terms of trade agreements with U.S. trading partners. USTR will consider including CSR issues and projects in trade agreements if the issue is raised by trading partners. For example, CSR language is included in the U.S.-Chile and U.S.-Singapore free trade agreements. FTA negotiators address CSR as warranted during negotiations. Not available. Not available.

Business groups. One percent of one staff person’s time. USTR meets with business groups on an ad hoc basis to discuss a range of issues. On occasion, this includes encouraging businesses to implement corporate codes of conduct. No discrete budget.

In addition, Kate Blumenreich, Kenneth Bombara, Martin De Alteriis, Mark Dowling, Tim Fairbanks, and Kim Frankena made key contributions to this report. Shirley Brothwell, Emilie Cassou, Jeanette Espinola and Richard Lindsey also provided assistance.
The trend toward globalization has intensified the debate about the proper role of business and government in global "corporate social responsibility" (CSR), which involves business efforts to address the social and environmental concerns associated with business operations. The growth in global trade and the dramatic increase in foreign direct investment in developing countries raise questions regarding CSR-related issues such as labor, environment, and human rights. U.S. firms with operations in many countries employ millions of foreign workers and conduct a range of CSR activities to address these issues. However, there is controversy as to the proper government role. GAO describes (1) federal agency policies and programs relating to global CSR and (2) different perspectives regarding the appropriate U.S. government role and views on the impact of current federal activities on corporate global CSR efforts. Although there is no broad federal CSR mandate, we identified 12 U.S. agencies with over 50 federal programs, policies, and activities that generally fall into four roles of endorsing, facilitating, partnering, or mandating CSR activities. Many of these programs have small budgets and staff and aim to accomplish broader agency mission goals, rather than being specifically designed to facilitate or promote companies' global CSR activities. The U.S. government endorses CSR by providing awards to companies, such as the Department of State's Award for Corporate Excellence. Federal programs facilitate CSR by such activities as providing information or funding to engage in CSR. For example, a Department of Commerce program facilitates CSR by providing training on corporate stewardship. Some agencies partner with corporations on specific projects related to their core mission. For example, the U.S. Agency for International Development (USAID) partnered with one U.S. corporation operating in post-war Angola to build up the country's business sector and workforce. Other agencies, such as the Overseas Private Investment Corporation, mandate CSR by requiring companies to meet CSR-related criteria to obtain their services. While perspectives on the government's role are tied to perspectives on CSR and its connection to profit, many we spoke with who are actively involved in global CSR desired a government role supporting business's voluntary CSR efforts. Those with a free-market economic perspective believe corporations should be primarily concerned with earning a profit and that government should not promote CSR because it reduces profits. Those with a "business case" perspective often welcome government assistance with their voluntary efforts because they view their CSR efforts as increasing profits and business value. Finally, those with a social issues perspective believe that business should contribute to broader social goals but are split on whether business action should be voluntary or mandatory. Most groups we spoke with at U.S. companies and others actively engaged in CSR were generally supportive of U.S. federal agency efforts to endorse and facilitate CSR and partner with companies voluntarily pursuing CSR actions. For example, several groups supported a government role in providing CSR-related information and convening stakeholders to address CSR-related issues.
As of June 2008, there were approximately 58 million first-lien home mortgages outstanding in the United States. According to a Federal Reserve estimate, outstanding home mortgages represented over $10 trillion in mortgage debt. The primary mortgage market has several segments and offers a range of loan products: The prime market segment serves borrowers with strong credit histories and provides the most competitive interest rates and mortgage terms. The subprime market segment generally serves borrowers with blemished credit and features higher interest rates and fees than the prime market. The Alternative-A (Alt-A) market segment generally serves borrowers whose credit histories are close to prime, but the loans often have one or more higher-risk features, such as limited documentation of income or assets. The government-insured or -guaranteed market segment primarily serves borrowers who may have difficulty qualifying for prime mortgages but features interest rates competitive with prime loans in return for payment of insurance premiums or guarantee fees. Across all of these market segments, two types of loans are common: fixed-rate mortgages, which have interest rates that do not change over the life of the loans, and adjustable-rate mortgages (ARM), which have interest rates that change periodically based on changes in a specified index. Delinquency, default and foreclosure rates are common measures of loan performance. Delinquency is the failure of a borrower to meet one or more scheduled monthly payments. Default generally occurs when a borrower is 90 or more days delinquent. At this point, foreclosure proceedings against the borrower become a strong possibility. Foreclosure is a legal (and often lengthy) process with several possible outcomes, including that the borrower sells the property or the lender repossesses the home. Two measures of foreclosure are foreclosure starts (loans that enter the foreclosure process during a particular time period) and foreclosure inventory (loans that are in, but have not exited, the foreclosure process during a particular time period). One of the main sources of information on the status of mortgage loans is the Mortgage Bankers Association’s quarterly National Delinquency Survey. The survey provides national and state-level information on mortgage delinquencies, defaults, and foreclosures back to 1979 for first- lien purchase and refinance mortgages on one-to-four family residential units. The data are disaggregated by market segment and loan type— fixed-rate versus adjustable-rate—but do not contain information on other loan or borrower characteristics. In response to problems in the housing and financial markets, the Housing and Economic Recovery Act of 2008 was enacted to strengthen and modernize the regulation of the government-sponsored enterprises (GSEs)—Fannie Mae, Freddie Mac, and the Federal Home Loan Banks— and expand their mission of promoting homeownership. The act established a new, independent regulator for the GSEs called the Federal Housing Finance Agency, which has broad new authority, generally equivalent to the authority of other federal financial regulators, to ensure the safe and sound operations of the GSEs. The new legislation also enhances the affordable housing component of the GSEs’ mission and expands the number of families Fannie Mae and Freddie Mac can serve by raising the loan limits in high-cost areas, where median house prices are higher than the regular conforming loan limit, to 150 percent of that limit. 
The act requires new affordable housing goals for Federal Home Loan Bank mortgage purchase programs, similar to those already in place for Fannie Mae and Freddie Mac. The act also established the HOPE for Homeowners program, which the Federal Housing Administration (FHA) will administer within the Department of Housing and Urban Development (HUD), to provide federally insured mortgages to distressed borrowers. The new mortgages are intended to refinance distressed loans at a significant discount for owner-occupants at risk of losing their homes to foreclosure. In exchange, homeowners share any equity created by the discounted restructured loan as well as future appreciation with FHA, which is authorized to insure up to $300 billion in new loans under this program. Additionally, the borrower cannot take out a second mortgage for the first five years of the loan, except under certain circumstances for emergency repairs. The program became effective October 1, 2008, and will conclude on September 30, 2011. To participate in the HOPE for Homeowners program, borrowers must also meet specific eligibility criteria as follows: Their mortgage must have originated on or before January 1, 2008. They must have made a minimum of six full payments on their existing first mortgage and must not have intentionally missed mortgage payments. They must not own a second home. Their mortgage debt-to-income ratio for their existing mortgage must be greater than 31 percent. They must not knowingly or willfully have provided false information to obtain the existing mortgage and must not have been convicted of fraud in the last 10 years. The Emergency Economic Stabilization Act, passed by Congress and signed by the President on October 3, 2008, created TARP, which outlines a troubled asset purchase and insurance program, among other things. The total size of the program cannot exceed $700 billion at any given time. Authority to purchase or insure $250 billion was effective on the date of enactment, with an additional $100 billion in authority available upon submission of a certification by the President. A final $350 billion is available under the act but is subject to Congressional review. The legislation required that financial institutions that sell troubled assets to Treasury also provide a warrant giving Treasury the right to receive shares of stock (common or preferred) in the institution or a senior debt instrument from the institution. The terms and conditions of the warrant or debt instrument must be designed to (1) provide Treasury with reasonable participation in equity appreciation or with a reasonable interest rate premium, and (2) provide additional protection for the taxpayer against losses from the sale of assets by Treasury and the administrative expenses of TARP. To the extent that Treasury acquires troubled mortgage-related assets, the act also directs Treasury to encourage servicers of the underlying loans to take advantage of the HOPE for Homeowners Program. Treasury is also required to consent, where appropriate, to reasonable requests for loan modifications from homeowners whose loans are acquired by the government. The act also requires the Federal Housing Finance Agency, the Federal Deposit Insurance Corporation (FDIC), and the Federal Reserve Board to implement a plan to maximize assistance to homeowners, that may include reducing interest rates and principal on residential mortgages or mortgage-backed securities owned or managed by these institutions. 
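As a concrete illustration of the HOPE for Homeowners borrower criteria listed above, the short Python sketch below screens a borrower against those conditions. The function name, the field names, and the simple date comparison are hypothetical conveniences for exposition; FHA applies additional underwriting requirements beyond this simplified check, and nothing here reflects the program's actual implementation.

```python
from datetime import date

def meets_hope_for_homeowners_criteria(
    origination_date: date,               # date the existing mortgage was originated
    full_payments_made: int,              # full payments made on the existing first mortgage
    intentionally_missed_payments: bool,  # borrower intentionally missed mortgage payments
    owns_second_home: bool,
    mortgage_dti: float,                  # existing mortgage debt-to-income ratio (0.35 = 35%)
    provided_false_info: bool,            # knowingly or willfully provided false information
    fraud_conviction_last_10_years: bool,
) -> bool:
    """Illustrative screen of the eligibility criteria described in this statement."""
    return (
        origination_date <= date(2008, 1, 1)   # originated on or before January 1, 2008
        and full_payments_made >= 6             # at least six full payments made
        and not intentionally_missed_payments
        and not owns_second_home
        and mortgage_dti > 0.31                 # existing mortgage DTI greater than 31 percent
        and not provided_false_info
        and not fraud_conviction_last_10_years
    )

# Example: a 2006 loan, 24 payments made, and a 42 percent mortgage debt-to-income ratio
print(meets_hope_for_homeowners_criteria(
    date(2006, 5, 1), 24, False, False, 0.42, False, False))  # True
```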
The regulators have also taken steps to support the mortgage finance system. On November 25, 2008, the Federal Reserve announced that it would purchase up to $100 billion in direct obligations of the GSEs (Fannie Mae, Freddie Mac, and the Federal Home Loan Banks), and up to $500 billion in mortgage-backed securities backed by Fannie Mae, Freddie Mac, and Ginnie Mae. It undertook the action to reduce the cost and increase the availability of credit for home purchases, thereby supporting housing markets and improving conditions in financial markets more generally. Also, on November 12, 2008, the four financial institution regulators issued a joint statement underscoring their expectation that all banking organizations fulfill their fundamental role in the economy as intermediaries of credit to businesses, consumers, and other creditworthy borrowers, and that banking organizations work with existing mortgage borrowers to avoid preventable foreclosures. The regulators further stated that banking organizations need to ensure that their mortgage servicing operations are sufficiently funded and staffed to work with borrowers while implementing effective risk-mitigation measures. Finally, on November 11, 2008, the Federal Housing Finance Agency (FHFA) announced a streamlined loan modification program for home mortgages controlled by the GSEs. Most mortgages are bundled into securities called residential mortgage-backed securities that are bought and sold by investors. These securities may be issued by GSEs and private companies. Privately issued mortgage-backed securities, known as private label securities, are typically backed by mortgage loans that do not conform to GSE purchase requirements because they are too large or do not meet GSE underwriting criteria. Investment banks bundle most subprime and Alt-A loans into private label residential mortgage-backed securities. The originator/lender of a pool of securitized assets usually continues to service the securitized portfolio. Servicing includes customer service and payment processing for the borrowers in the securitized pool and collection actions in accordance with the pooling and servicing agreement. The decision to modify loans held in a mortgage-backed security typically resides with the servicer. According to some industry experts, the servicer may be limited by the pooling and servicing agreement with respect to performing any large-scale modification of the mortgages that the security is based upon. However, others have stated that the vast majority of servicing agreements do not preclude loan modifications or routinely require investor approval for them. We have not assessed how many potentially troubled loans face restrictions on modification. National default and foreclosure rates rose sharply during the 3-year period from the second quarter of 2005 through the second quarter of 2008 to the highest level in 29 years (fig. 1). More specifically, default rates more than doubled over the 3-year period, growing from 0.8 percent to 1.8 percent. Similarly, foreclosure start rates—representing the percentage of loans that entered the foreclosure process each quarter—grew almost three-fold, from 0.4 percent to 1 percent. Put another way, nearly half a million mortgages entered the foreclosure process in the second quarter of 2008, compared with about 150,000 in the second quarter of 2005.
Finally, foreclosure inventory rates rose 175 percent over the 3-year period, increasing from 1.0 percent to 2.8 percent, with most of that growth occurring since the second quarter of 2007. As a result, almost 1.25 million loans were in the foreclosure inventory as of the second quarter of 2008. Default and foreclosure rates varied by market segment and product type, with subprime and adjustable-rate loans experiencing the largest increases during the 3-year period we examined. More specifically: In the prime market segment, which accounted for more than three-quarters of the mortgages being serviced, 2.4 percent of loans were in default or foreclosure by the second quarter of 2008, up from 0.7 percent 3 years earlier. Foreclosure start rates for prime loans began the period at relatively low levels (0.2 percent) but rose sharply on a percentage basis, reaching 0.6 percent in the second quarter of 2008. In the subprime market segment, about 18 percent of loans were in default or foreclosure by the second quarter of 2008, compared with 5.8 percent 3 years earlier. Subprime mortgages accounted for less than 15 percent of the loans being serviced, but over half of the overall increase in the number of mortgages in default and foreclosure over the period. Additionally, foreclosure start rates for subprime loans more than tripled, rising from 1.3 percent to 4.3 percent (see fig. 2). In the government-insured or -guaranteed market segment, which represented about 10 percent of the mortgages being serviced, 4.8 percent of the loans were in default or foreclosure in the second quarter of 2008, up from 4.5 percent 3 years earlier. Additionally, foreclosure start rates in this segment increased modestly, from 0.7 to 0.9 percent. ARMs accounted for a disproportionate share of the increase in the number of loans in default and foreclosure in the prime and subprime market segments over the 3-year period. In both the prime and subprime market segments, ARMs experienced relatively steeper increases in default and foreclosure rates, compared with more modest growth for fixed-rate mortgages. In particular, foreclosure start rates for subprime ARMs more than quadrupled over the 3-year period, increasing from 1.5 percent to 6.6 percent. Default and foreclosure rates also varied significantly among states. For example, as of the second quarter of 2008, the percentage of mortgages in default or foreclosure ranged from 1.1 percent in Wyoming to 8.4 percent in Florida. Other states that had particularly high combined rates of default and foreclosure included California (6.0 percent), Michigan (6.2 percent), Nevada (7.6 percent), and Ohio (6.0 percent). Every state in the nation experienced growth in its foreclosure start rate from the second quarter of 2005 through the second quarter of 2008. By the end of that period, foreclosure start rates were at their 29-year maximums in 17 states. As shown in figure 3, percentage increases in foreclosure start rates differed dramatically by state. The foreclosure start rate rose at least 10 percent in every state over the 3-year period, but 23 states experienced an increase of 100 percent or more. Several states in the “Sun Belt” region, such as Arizona, California, Florida, and Nevada, had among the highest percentage increases in foreclosure start rates. In contrast, 7 states experienced increases of 30 percent or less, including North Carolina, Oklahoma, and Utah.
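The percentage changes cited above are straightforward to reproduce. The sketch below recomputes the approximate national increases from the quarterly rates quoted in this statement and applies the same "100 percent or more" screen to a pair of hypothetical state figures (the state values are illustrative placeholders, not Mortgage Bankers Association data). Because the quoted rates are rounded to one decimal place, the recomputed increases are approximate and can differ modestly from the figures in the text, which GAO presumably derived from unrounded survey data.

```python
def pct_increase(start_rate: float, end_rate: float) -> float:
    """Percentage increase from the 2005:Q2 rate to the 2008:Q2 rate."""
    return (end_rate - start_rate) / start_rate * 100.0

# National figures quoted in this statement (percent of loans being serviced)
national = {
    "default rate (90+ days past due)": (0.8, 1.8),
    "foreclosure start rate": (0.4, 1.0),
    "foreclosure inventory rate": (1.0, 2.8),
}
for measure, (q2_2005, q2_2008) in national.items():
    print(f"{measure}: {q2_2005}% -> {q2_2008}% "
          f"({pct_increase(q2_2005, q2_2008):.0f}% increase)")

# Hypothetical state-level foreclosure start rates, illustrating the >=100 percent screen
state_start_rates = {"State A": (0.5, 1.6), "State B": (0.6, 0.75)}
doubled = [s for s, (a, b) in state_start_rates.items() if pct_increase(a, b) >= 100]
print("States with increases of 100 percent or more:", doubled)
```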
Treasury is currently examining strategies for homeownership preservation, including maximizing loan modifications, in light of a refocus in its use of TARP funds. Treasury’s initial focus in implementing TARP was to stabilize the financial markets and stimulate lending to businesses and consumers by purchasing troubled mortgage-related assets—securities and whole loans—from financial institutions. Treasury planned to use its leverage as a major purchaser of troubled mortgages to work with servicers and achieve more aggressive mortgage modification standards. However, Treasury subsequently concluded that purchasing troubled assets would take time to implement and would not be sufficient given the severity of the problem. Instead, Treasury determined that the most timely, effective way to improve credit market conditions was to strengthen bank balance sheets quickly through direct purchases of equity in banks. The standard agreement between Treasury and the participating institutions in the Capital Purchase Program (CPP) includes a number of provisions, some in the “recitals” section at the beginning of the agreement and other detailed terms in the body of the agreement. The recitals refer to the participating institutions’ future actions in general terms—for example, “the Company agrees to work diligently, under existing programs to modify the terms of residential mortgages as appropriate to strengthen the health of the U.S. housing market.” Treasury and the regulators have publicly stated that they expect these institutions to use the funds in a manner consistent with the goals of the program, which include both the expansion of the flow of credit and the modification of the terms of residential mortgages. But, to date, it remains unclear how OFS and the regulators will monitor how participating institutions are using the capital injections to advance the purposes of the act. The standard agreement between Treasury and the participating institutions does not require that these institutions track or report how they use or plan to use their capital investments. In our first 60-day report to Congress on TARP, mandated by the Emergency Economic Stabilization Act, we recommended that Treasury, among other things, work with the bank regulators to establish a systematic means for determining and reporting on whether financial institutions’ activities are generally consistent with the purposes of CPP. Without purchasing troubled mortgage assets as an avenue for preserving homeownership, Treasury is considering other ways to meet this objective. Treasury has established an Office of Homeownership Preservation under OFS and appointed an interim chief. According to Treasury officials, the office is currently staffed with federal government detailees and is in the process of hiring individuals with expertise in housing policy, community development and economic research. Treasury has stated that it is working with other federal agencies, including FDIC, HUD, and FHFA, to explore options to help homeowners under TARP. According to the Office of Homeownership Preservation interim chief, Treasury is considering a number of factors in its review of possible loan modification options, including the cost of the program, the extent to which the program minimizes recidivism among borrowers helped out of default, and the number of homeowners the program has helped or is projected to help remain in their homes. However, to date the Treasury has not completed its strategy for preserving homeownership.
Among the strategies for loan modification that Treasury is considering is a proposal by FDIC that is based on its experiences with loans held by a bank that was recently put in FDIC conservatorship. The former IndyMac Bank, F.S.B., was closed July 11, 2008, and FDIC was appointed the conservator for the new institution, IndyMac Federal Bank, F.S.B. As a result, FDIC inherited responsibility for servicing a pool of approximately 653,000 first-lien mortgage loans, including more than 60,000 mortgage loans that were more than 60 days past due, in bankruptcy, in foreclosure, and otherwise not currently paying. On August 20, 2008, the FDIC announced a program to systematically modify troubled residential loans for borrowers with mortgages owned or serviced by IndyMac Federal. According to FDIC, the program modifies eligible delinquent mortgages to achieve affordable and sustainable payments using interest rate reductions, extended amortization, and, where necessary, deferral of a portion of the principal. FDIC has stated that by modifying the loans to an affordable debt-to-income ratio (38 percent at the time) and using a menu of options to lower borrowers’ payments for the life of their loan, the program improves the value of the troubled mortgages while achieving economies of scale for servicers and stability for borrowers. According to FDIC, as of November 21, 2008, IndyMac Federal has mailed more than 23,000 loan modification proposals to borrowers and over 5,000 borrowers have accepted the offers and are making payments on modified mortgages. FDIC states that monthly payments on these modified mortgages are, on average, 23 percent or approximately $380 lower than the borrower’s previous monthly payment of principal and interest. According to FDIC, a federal loss sharing guarantee on re-defaults of modified mortgages under TARP could prevent as many as 1.5 million avoidable foreclosures by the end of 2009. FDIC estimated that such a program, including a lower debt-to-income ratio of 31 percent and a sharing of losses in the event of a re-default, would cost about $24.4 billion on an estimated $444 billion of modified loans, based on an assumed re-default rate of 33 percent. We have not had an opportunity to independently analyze these estimates and assumptions. Other similar programs under review, according to Treasury, include strategies to guarantee loan modifications by private lenders, such as the HOPE for Homeowners program. Under this new FHA program, lenders can have loans in their portfolio refinanced into FHA-insured loans with fixed interest rates. HERA had limited the new insured mortgages to no more than 90 percent of the property’s current appraised value. However, on November 19, 2008, after action by the congressionally created Board of Directors of the HOPE for Homeowners program, HUD announced that the program had been revised to, among other things, increase the maximum amount of the new insured mortgages in certain circumstances. Specifically, the new insured mortgages cannot exceed 96.5 percent of the current appraised value for borrowers whose mortgage payments represent no more than 31 percent of their monthly gross income and monthly household debt payments no more than 43 percent of monthly gross income. Alternatively, the new mortgage may be set at 90 percent of the current appraised value for borrowers with monthly mortgage and household debt-to-income ratios as high as 38 and 50 percent, respectively.
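A short numerical sketch may help make the two loan-to-value tiers just described concrete. The figures, the function name, and the simplified two-tier test below are illustrative assumptions for exposition; they are not the program's actual underwriting rules, which include additional conditions.

```python
def max_new_h4h_loan(appraised_value: float, mortgage_dti: float, total_dti: float):
    """Maximum new FHA-insured loan under the revised HOPE for Homeowners terms
    described above (simplified illustration)."""
    if mortgage_dti <= 0.31 and total_dti <= 0.43:
        return 0.965 * appraised_value   # 96.5 percent of current appraised value
    if mortgage_dti <= 0.38 and total_dti <= 0.50:
        return 0.90 * appraised_value    # 90 percent of current appraised value
    return None                          # outside the ratio tiers quoted above

# Hypothetical borrower: $200,000 current appraised value, $230,000 outstanding balance
appraised, outstanding = 200_000, 230_000
new_loan = max_new_h4h_loan(appraised, mortgage_dti=0.36, total_dti=0.48)
writedown = outstanding - new_loan       # amount the existing lender must write off
print(new_loan, writedown)               # 180000.0 50000.0
```

As the hypothetical borrower shows, the capped new loan can be well below the outstanding balance, which is the writedown implication discussed next.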
These loan-to-value ratio maximums mean that in many circumstances the amount of the restructured loan would be less than the original loan amount and, therefore, would require lenders to write down the existing mortgage amounts. According to FHA, lenders benefit by turning failing mortgages into performing loans. Borrowers must also share a portion of the equity resulting from the new mortgage and the value of future appreciation. This program first became available October 1, 2008. FHA has listed on the program’s Web site over 200 lenders that, as of November 25, 2008, have indicated to FHA an interest in refinancing loans under the HOPE for Homeowners program. See the appendix to this statement for examples of federal government and private sector residential mortgage loan modification programs. Treasury is also considering policy actions that might be taken under CPP to encourage participating institutions to modify mortgages at risk of default, according to an OFS official. While not technically part of CPP, Treasury announced on November 23, 2008, that it will invest an additional $20 billion in Citigroup from TARP in exchange for preferred stock with an 8 percent dividend to the Treasury. In addition, Treasury and FDIC will provide protection against unusually large losses on a pool of loans and securities on the books of Citigroup. The Federal Reserve will backstop residual risk in the asset pool through a non-recourse loan. The agreement requires Citigroup to absorb the first $29 billion in losses. Subsequent losses are shared between the government (90 percent) and Citigroup (10 percent). As part of the agreement, Citigroup will be required to use FDIC loan modification procedures to manage guaranteed assets unless otherwise agreed. Although any program for modifying loans faces a number of challenges, particularly when the loans or the cash flows related to them have been bundled into securities that are sold to investors, foreclosures not only affect those losing their homes but also their neighborhoods and have contributed to increased volatility in the financial markets. Some of the challenges that loan modification programs face include making transparent to investors the analysis supporting the value of modification over foreclosure, designing the program to limit the likelihood of re-default, and ensuring that the program does not encourage borrowers who otherwise would not default to fall behind on their mortgage payments. Additionally, there are a number of potential obstacles that may need to be addressed in performing large-scale modification of loans supporting a mortgage-backed security. As noted previously, the pooling and servicing agreements may preclude the servicer from making any modifications of the underlying mortgages without approval by the investors. In addition, many homeowners may have second liens on their homes that may be controlled by a different loan servicer, potentially complicating loan modification efforts. Treasury also points to challenges in financing any new proposal. The Secretary of the Treasury, for example, noted that it was important to distinguish the type of assistance, which could involve direct spending, from the type of investments that are intended to promote financial stability, protect the taxpayer, and be recovered under the TARP legislation.
However, he recently reaffirmed that maximizing loan modifications was a key part of working through the housing correction and maintaining the quality of communities across the nation. Treasury, however, has not specified how it intends to meet its commitment to loan modification. We will continue to monitor Treasury’s efforts as part of our ongoing TARP oversight responsibilities. Going forward, the federal government faces significant challenges in effectively deploying its resources and using its tools to bring greater stability to financial markets, preserve homeownership, and protect home values for millions of Americans. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time.

Appendix: Examples of federal government and private sector residential mortgage loan modification programs

Eligible borrowers are those with loans owned or serviced by IndyMac Federal Bank. An affordable mortgage payment is achieved for the seriously delinquent or in-default borrower through interest rate reduction, amortization term extension, and/or principal forbearance. The payment must be no more than 38 percent of the borrower’s monthly gross income. Losses to the investor are minimized through a net present value test that confirms that the modification will cost the investor less than foreclosure.

Borrowers can refinance into an affordable loan insured by FHA. Eligible borrowers are those who, among other factors, as of March 2008, had total monthly mortgage payments due of more than 31 percent of their gross monthly income. New insured mortgages cannot exceed a loan-to-value ratio (LTV) of 96.5 percent of the current appraised value for borrowers whose mortgage payments do not exceed 31 percent of their monthly gross income and whose total household debt does not exceed 43 percent; alternatively, the program allows for a 90 percent LTV for borrowers with debt-to-income ratios as high as 38 percent (mortgage payment) and 50 percent (total household debt). The program requires lenders to write down the existing mortgage amounts to either of the two LTV limits.

Eligible borrowers are those who, among other factors, have missed three payments or more. Servicers can modify existing loans into a Fannie Mae or Freddie Mac loan, or a portfolio loan with a participating investor. An affordable mortgage payment, of no more than 38 percent of the borrower’s monthly gross income, is achieved for the borrower through a mix of reducing the mortgage interest rate, extending the life of the loan, or deferring payment on part of the principal.

Eligible borrowers are those with subprime or pay option adjustable rate mortgages serviced by Countrywide and originated by Countrywide prior to December 31, 2007. Options for modification include refinance under the FHA HOPE for Homeowners program, interest rate reductions, and principal reduction for pay option adjustable rate mortgages. First-year mortgage payments will be targeted at 34 percent of the borrower’s income, but may go as high as 42 percent. Annual principal and interest payments will increase at limited step-rate adjustments.

An affordable mortgage payment is achieved for the borrower at risk of default through interest rate reduction and/or principal forbearance. Modification may also include converting pay-option ARMs to 30-year, fixed-rate loans or to interest-only payments for 10 years. Modification includes flexible eligibility criteria on origination dates, loan-to-value ratios, rate floors, and step-up adjustment features. This program was created in consultation with Fannie Mae, Freddie Mac, HOPE NOW and its twenty-seven servicer partners, the
Department of the Treasury, FHA and FHFA. For further information about this statement, please contact Mathew J. Scire, Director, Financial Markets and Community Investment, at (202) 512-8678 or sciremj@gao.gov. In addition to the contact named above, the following individuals from GAO’s Financial Markets and Community Investment Team also made major contributions to this testimony: Harry Medina and Steve Westley, Assistant Directors; Jamila Jones and Julie Trinder, Analysts-in-Charge; Jim Vitarello, Senior Analyst; Rachel DeMarcus, Assistant General Counsel; and Emily Chalmers and Jennifer Schwartz, Communications Analysts. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A dramatic increase in mortgage loan defaults and foreclosures is one of the key contributing factors to the current downturn in the U.S. financial markets and economy. In response, Congress passed and the President signed in July the Housing and Economic Recovery Act of 2008 and in October the Emergency Economic Stabilization Act of 2008 (EESA), which established the Office of Financial Stability (OFS) within the Department of the Treasury and authorized the Troubled Asset Relief Program (TARP). Both acts establish new authorities to preserve homeownership. In addition, the administration, independent financial regulators, and others have undertaken a number of recent efforts to preserve homeownership. GAO was asked to update its 2007 report on default and foreclosure trends for home mortgages, and describe the OFS's efforts to preserve homeownership. GAO analyzed quarterly default and foreclosure data from the Mortgage Bankers Association for the period 1979 through the second quarter of 2008 (the most recent quarter for which data were available). GAO also relied on work performed as part of its mandated review of Treasury's implementation of TARP, which included obtaining and reviewing information from Treasury, federal agencies, and other organizations (including selected banks) on home ownership preservation efforts. To access GAO's first oversight report on Treasury's implementation of TARP, see GAO-09-161 . Default and foreclosure rates for home mortgages rose sharply from the second quarter of 2005 through the second quarter of 2008, reaching a point at which more than 4 in every 100 mortgages were in the foreclosure process or were 90 or more days past due. These levels are the highest reported in the 29 years since the Mortgage Bankers Association began keeping complete records and are based on its latest available data. The subprime market, which consists of loans to borrowers who generally have blemished credit and that feature higher interest rates and fees, experienced substantially steeper increases in default and foreclosure rates than the prime or government-insured markets, accounting for over half of the overall increase. In the prime and subprime market segments, adjustable-rate mortgages experienced steeper growth in default and foreclosure rates than fixed-rate mortgages. Every state in the nation experienced growth in the rate at which loans entered the foreclosure process from the second quarter of 2005 through the second quarter of 2008. The rate rose at least 10 percent in every state over the 3-year period, but 23 states experienced an increase of 100 percent or more. Several states in the "Sun Belt" region, including Arizona, California, Florida, and Nevada, had among the highest percentage increases. OFS initially intended to purchase troubled mortgages and mortgage-related assets and use its ownership position to influence loan servicers and to achieve more aggressive mortgage modification standards. However, within two weeks of EESA's passage, Treasury determined it needed to move more quickly to stabilize financial markets and announced it would use $250 billion of TARP funds to inject capital directly into qualified financial institutions by purchasing equity. In recitals to the standard agreement with Treasury, institutions receiving capital injections state that they will work diligently under existing programs to modify the terms of residential mortgages. 
It remains unclear, however, how OFS and the banking regulators will monitor how these institutions are using the capital injections to advance the purposes of the act, including preserving homeownership. As part of its first TARP oversight report, GAO recommended that Treasury, among other things, work with the bank regulators to establish a systematic means for determining and reporting on whether financial institutions' activities are generally consistent with program goals. Treasury also established an Office of Homeownership Preservation within OFS that is reviewing various options for helping homeowners, such as insuring troubled mortgage-related assets or adopting programs based on the loan modification efforts of FDIC and others, but it is still working on its strategy for preserving homeownership. While Treasury and others will face a number of challenges in undertaking loan modifications, including making transparent to investors the analysis supporting the value of modification versus foreclosure, rising defaults and foreclosures on home mortgages underscore the importance of ongoing and future efforts to preserve homeownership. GAO will continue to monitor Treasury's efforts as part of its mandated TARP oversight responsibilities.
Medicare covers up to 100 days of care in a SNF after a beneficiary has been hospitalized for at least 3 days. To qualify for the benefit, the patient must need skilled nursing or therapy on a daily basis. For the first 20 days of SNF care, Medicare pays all the costs, and for the 21st through the 100th day, the beneficiary is responsible for daily coinsurance of $95 in 1997. To qualify for home health care, a beneficiary must be confined to his or her residence (“homebound”); require part-time or intermittent skilled nursing, physical therapy, or speech therapy; be under the care of a physician; and have the services furnished under a plan of care prescribed and periodically reviewed by a physician. If these conditions are met, Medicare will pay for skilled nursing; physical, occupational, and speech therapy; medical social services; and home health aide visits. Beneficiaries are not liable for any coinsurance or deductibles for these home health services, and there is no limit on the number of visits for which Medicare will pay. Medicare currently pays SNFs and home health agencies on the basis of their costs, subject to limits. Home health cost limits are set for each type of visit (skilled nursing, physical therapy, and so on) but are applied in the aggregate; that is, an agency’s costs over the limit for one type of visit can be offset by costs below the limit for another. Both SNF and home health cost limits are adjusted for differences in wage levels across geographic areas. Also, exemptions from and exceptions to the cost limits are available to SNFs and home health agencies that meet certain conditions. While the cost-limit provisions of Medicare’s cost reimbursement system for SNFs and home health agencies give some incentives for providers to control the affected costs, these incentives are considered by health financing experts to be relatively weak, especially for providers with costs considerably below their limit. On the other hand, it is generally agreed that prospective payment systems (PPS) give providers increased cost-control incentives. The administration proposes establishing PPSs for SNF and home health care and estimates that Medicare savings exceeding $10 billion would result over the next 5 fiscal years. The Medicare SNF and home health benefits are two of the fastest growing components of Medicare spending. From 1989 to 1996, Medicare part A SNF expenditures increased over 300 percent, from $2.8 billion to $11.3 billion. During the same period, part A expenditures for home health increased from $2.4 billion to $17.7 billion—an increase of over 600 percent. SNF and home health payments currently represent 8.6 percent and 13.5 percent of part A Medicare expenditures, respectively. At Medicare’s inception in 1966, the home health benefit under part A provided limited posthospital care of up to 100 visits per year after a hospitalization of at least 3 days. In addition, the services could only be provided within 1 year after the patient’s discharge and had to be for the same illness. Part B coverage of home health was limited to 100 visits per year. These restrictions under part A and part B were eliminated by the Omnibus Reconciliation Act of 1980 (ORA, P.L. 96-499), but little immediate effect on Medicare costs occurred. In the late 1980s, however, partly in response to court decisions, the Health Care Financing Administration (HCFA) revised its coverage guidelines for the SNF and home health benefits. These changes had the effect of liberalizing coverage criteria, thereby making it easier for beneficiaries to obtain SNF and home health coverage.
Additionally, the changes prevent HCFA’s claims processing contractors from denying physician-ordered SNF or home health services unless the contractors can supply specific clinical evidence that indicates which particular services should not be covered. The combination of these legislative and coverage policy changes has had a dramatic effect on utilization of these two benefits in the 1990s, both in terms of the number of beneficiaries receiving services and in the extent of these services. (App. I contains figures that show growth in SNF and home health expenditures in relation to the legislative and policy changes.) For example, ORA 1980 and HCFA’s 1989 home health guideline changes have essentially transformed the home health benefit from one focused on patients needing short-term care after hospitalization to one that serves chronic, long-term care patients as well. The number of beneficiaries receiving home health care more than doubled in the last few years, from 1.7 million in 1989 to about 3.9 million in 1996. During the same period, the average number of visits to home health beneficiaries also more than doubled, from 27 to 72. In a recent report on home health, we found that from 1989 to 1993, the proportion of home health users receiving more than 30 visits increased from 24 percent to 43 percent and those receiving more than 90 visits tripled, from 6 percent to 18 percent, indicating that the program is serving a larger proportion of longer-term patients. Moreover, about a third of beneficiaries receiving home health care did not have a prior hospitalization, another possible indication that chronic care is being provided. SNF expenditure growth, for its part, has been driven largely by increased use of ancillary services, and SNFs have had weak incentives to control these services because little review of their use is done by Medicare. Moreover, SNFs can cite high ancillary service use to justify an exception to routine service cost limits, thereby increasing routine service payments. Between 1990 and 1996, the number of hospital-based SNFs increased over 80 percent, from 1,145 such facilities to 2,088. Hospitals can benefit from establishing a SNF unit in a number of ways. Hospitals receive a set fee for a patient’s entire hospital stay, based on a patient’s diagnosis-related group (DRG). Therefore, the quicker that hospitals discharge a patient into a SNF, the lower that patient’s inpatient hospital care costs are. We found that in 1994, patients with any of 12 DRGs commonly associated with posthospital SNF use had 4 to 21 percent shorter stays in hospitals with SNF units than patients with the same DRGs in hospitals without SNF units. Additionally, by owning a SNF, hospitals can increase their Medicare revenues through receipt of the full DRG payment for patients with shorter lengths of stay and a cost-based payment after the patients are transferred to the SNF. Rapid growth in SNF and home health expenditures has been accompanied by decreased, rather than increased, funding for program safeguard activities. For example, our March 1996 report found that part A contractor funding for medical review had decreased by almost 50 percent between 1989 and 1995. As a result, while contractors had reviewed over 60 percent of home health claims in fiscal year 1987, their review target had been lowered by 1995 to 3.2 percent of all claims (or even, depending on available resources, to a required minimum of 1 percent).
We found that a lack of adequate controls over the home health program, such as little intermediary medical review and limited physician involvement, makes it nearly impossible to know whether the beneficiary receiving home care qualifies for the benefit, needs the care being delivered, or even receives the services being billed to Medicare. Also, because of the small percentage of claims now selected for review, home health agencies that bill for noncovered services are less likely to be identified than was the case 10 years ago. Similarly, the low level of review of SNF services makes it difficult to know whether the recent increase in ancillary use is medically necessary (for example, because patient mix has shifted toward those who need more services) or simply a way for SNFs to get more revenues. Finally, because relatively few resources are available for auditing end-of-year provider cost reports, HCFA has little ability to identify whether home health agencies or SNFs are charging Medicare for costs unrelated to patient care or other unallowable costs. Because of the lack of adequate program controls, it is quite possible that some of the recent increase in home health and SNF expenditures stems from abusive practices. The Health Insurance Portability and Accountability Act of 1996 (P.L. 104-191), also known as the Kassebaum-Kennedy Act, has increased funding for program safeguards. However, per-claim expenditures will remain below the level in 1989, after adjusting for inflation. We project that, in 2003, payment safeguard spending as authorized by Kassebaum-Kennedy will be just over one-half of the 1989 per-claim level, after adjusting for inflation. The goal in designing a PPS is to ensure that providers have incentives to control costs and that, at the same time, payments are adequate for efficient providers to furnish needed services and at least recover their costs. If payments are set too high, Medicare will not save money and cost-control incentives can be weak. If payments are set too low, access to and quality of care can suffer. In designing a PPS, selection of the unit of service for payment purposes is important because the unit used has a strong effect on the incentives providers have for the quantity and quality of services they provide. Taking account of the varying needs of patients for different types of services—routine, ancillary, or all—is also important. A third important factor is the reliability of the cost and utilization data used to compute rates. Good choices for unit of service and cost coverage can be overwhelmed by bad data. We understand that the administration will propose a SNF PPS that would pay per diem rates covering all facility cost types and that payments would be adjusted for differences in patient case mix. Such a system is expected to be similar to HCFA’s ongoing SNF PPS demonstration project that is testing the use of per diem rates adjusted for resource need differences using the Resource Utilization Group, version III (RUG-III) patient classification system. This project was recently expanded to include coverage of ancillary costs in the prospective payment rates. An alternative to the proposal’s choice of a day of care as the unit of service is an episode of care—the entire period of SNF care covered by Medicare. While substantial variation exists in the amount of resources needed to treat beneficiaries with the same conditions when viewed from the day-of-care perspective, even more variation exists at the episode-of-care level.
Resource needs are less predictable for episodes of care. Moreover, payment on an episode basis may result in some SNFs inappropriately reducing the number of covered days. Both factors make a day of care the better candidate for a PPS unit of service. Furthermore, the likely patient classification system, RUG-III, is designed for and being tested in a per diem PPS. On the other hand, a day-of-care unit gives few, if any, incentives to control length of stay, so a review process for this purpose would still be needed. The states and HCFA have considerable experience with per diem payment methods for nursing homes under the Medicaid program, primarily for routine costs but also, in some cases, for total costs. This experience should prove useful in designing a per diem Medicare PPS. Regarding the types of costs covered by PPS rates, a major contributor to Medicare’s SNF cost growth has been the increased use of ancillary services, particularly therapy services. This, in turn, means that it is important to give SNFs incentives to control ancillary costs, and including them under PPS is a way to do so. However, adding ancillary costs does increase the variability of costs across patients and place additional importance on the case-mix adjuster to ensure reasonable and adequate rates. A further concern is the reliability of the SNF cost report data that would be used to set rates; we believe HCFA should conduct thorough audits of a projectable sample of SNF cost reports. The results could then be used to adjust cost report databases to remove the influence of unallowable costs, which would help ensure that inflated costs are not used as the base for PPS rate setting. The summary of the administration’s proposal for a home health PPS is very general, saying only that a PPS for an appropriate unit of service would be established in 1999 using budget-neutral rates calculated after reducing expenditures by 15 percent. HCFA estimates that this reduction will result in savings of $4.7 billion over fiscal years 1999 through 2002. The choice of the unit of service is crucial, and there is limited understanding of the need for and content of home health services to guide that choice. Choosing either a visit or an episode as the unit of service would have implications for both cost control and quality of care, depending on the response of home health agencies. For example, if the unit of service is a visit, agencies could profit by shortening the length of visits. At the same time, agencies could attempt to increase the number of visits, with the net result being higher total costs for Medicare, making the per-visit choice less attractive. If the unit of service is an episode of care over a period of time such as 30 or 100 days, agencies could gain by reducing the number of visits during that period, potentially lowering quality of care. For these reasons, HCFA needs to devise methods to ensure that whatever unit of service is chosen will not lead to increased costs or lower quality of care. If an episode of care is chosen as the unit of service, HCFA would need a method to ensure that beneficiaries receive adequate services and that any reduction in services that can be accounted for by past overprovision of care does not result in windfall profits for agencies. In addition, HCFA would need to be vigilant to ensure that patients meet coverage requirements, because agencies would be rewarded for increasing their caseloads. HCFA is currently testing various PPS methods and patient classification systems for possible use with home health care, and the results of these efforts may shed light on the unit-of-service question.
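To illustrate the basic mechanics of the per diem, case-mix-adjusted approach described above for the SNF PPS, the sketch below computes a hypothetical prospective payment for a covered stay. The base rate, the RUG-style case-mix weights, the wage adjustment, and the structure of the calculation are all illustrative assumptions for exposition; they are not the rates or formulas of HCFA's demonstration project or of any enacted payment system.

```python
# Hypothetical inputs, for illustration only
BASE_PER_DIEM = 250.00          # assumed base rate per covered day, covering all cost types
CASE_MIX_WEIGHTS = {            # assumed weights for a few illustrative patient groups
    "rehabilitation, high": 1.60,
    "clinically complex": 1.15,
    "reduced physical function": 0.85,
}

def per_diem_payment(group: str, wage_index: float) -> float:
    """Case-mix- and wage-adjusted payment for one covered day of SNF care."""
    return BASE_PER_DIEM * CASE_MIX_WEIGHTS[group] * wage_index

def stay_payment(group: str, wage_index: float, covered_days: int) -> float:
    """Total prospective payment for a stay under a day-of-care unit of service."""
    return per_diem_payment(group, wage_index) * covered_days

# Example: a 30-day covered stay for a clinically complex patient in a higher-wage area
print(round(stay_payment("clinically complex", 1.05, 30), 2))  # 9056.25
```

Because the total payment in this sketch scales directly with covered days, a per diem unit of service by itself gives the facility no incentive to shorten the stay, which is why a separate review process for length of stay would still be needed; an episode-based unit would cap the payment regardless of days but, as noted above, would make resource needs harder to predict.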
We have the same concerns about the quality of HCFA’s home health care cost report databases for PPS rate-setting purposes as we do for the SNF database. Again, we believe that adjusting the home health databases, using the results of thorough cost report audits of a projectable sample of agencies, would be wise. We are also concerned about the appropriateness of using current Medicare data on visit rates to determine payments under a PPS for episodes of care. As we reported in March 1996, controls over the use of home health care are virtually nonexistent. Operation Restore Trust, a joint effort by federal and state agencies in several states to identify fraud and abuse in Medicare and Medicaid, found very high rates of noncompliance with Medicare’s coverage conditions in targeted agencies. For example, in a sample of 740 beneficiaries drawn from 43 home health agencies in Texas and 31 in Louisiana that were selected because of potential problems, some or all of the services received by 39 percent of the beneficiaries were denied. About 70 percent of the denials were because the beneficiary did not meet the homebound definition. Although these are results from agencies suspected of having problems, they illustrate that substantial amounts of noncovered care are likely to be reflected in HCFA’s home health care utilization data. For these reasons, it would also be prudent for HCFA to conduct thorough on-site medical reviews of a projectable sample of agencies to give it a basis to adjust utilization rates for purposes of establishing a PPS. The administration has also announced that it will propose requiring SNFs to bill Medicare for all services provided to their beneficiary residents except for physician and some practitioner services. We support this proposal, as we did in a September 1995 letter to you, Mr. Chairman. We and the HHS Inspector General have reported on problems, such as overutilization of supplies, that can arise when suppliers bill separately for services for SNF residents. A consolidated billing requirement would make it easier for Medicare to identify all the services furnished to residents, which in turn would make it easier to control payments for those services. The requirement would also help prevent duplicate billings for supplies and services and billings for services not actually furnished by suppliers. In effect, outside suppliers would have to make arrangements with SNFs under such a provision so that nursing homes would bill for suppliers’ services and would be financially liable and medically responsible for the care. We stand ready to work with the Subcommittee and others to help sort out the potential implications of suggested revisions. This concludes my prepared remarks, and I will be happy to answer any questions. For more information on this testimony, please call William Scanlon on (202) 512-7114 or Thomas Dowdal, Senior Assistant Director, on (202) 512-6588. Patricia Davis also contributed to this statement. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S.
GAO discussed Medicare's skilled nursing facility (SNF) and home health care benefits and the administration's forthcoming legislative proposals related to them. GAO noted that: (1) Medicare's SNF costs have grown primarily because a larger portion of beneficiaries use SNFs than in the past and because of a large increase in the provision of ancillary services; (2) for home health care costs, both the number of beneficiaries and the number of services used by each beneficiary have more than doubled; (3) a combination of factors led to the increased use of both benefits: (a) legislation and coverage policy changes in response to court decisions liberalized coverage criteria for the benefits, enabling more beneficiaries to qualify for care; (b) these changes also transformed the nature of home health care from primarily posthospital care to more long-term care for chronic conditions; (c) earlier discharges from hospitals led to the substitution of days spent in SNFs for what in the past would have been the last few days of hospital care, and increased use of ancillary services, such as physical therapy, in SNFs; and (d) a diminution of administrative controls over the benefits, resulting at least in part from fewer resources being available for such controls, reduced the likelihood of inappropriately submitted claims being denied; (4) the major proposals by the administration for both SNFs and home health care are designed to give the providers of these services increased incentives to operate efficiently by moving them from a cost reimbursement to a prospective payment system; (5) however, what remains unclear about these proposals is whether an appropriate unit of service can be defined for calculating prospective payments and whether the Health Care Financing Administration's databases are adequate for it to set reasonable rates; (6) the administration is also proposing that SNFs be required to bill for all services provided to their Medicare residents rather than allowing outside suppliers to bill; and (7) this latter proposal has merit, because it would make control over the use of ancillary services significantly easier.
Medicare falls within the administrative jurisdiction of the Health Care Financing Administration (HCFA) of the Department of Health and Human Services (HHS). HCFA establishes regulations and guidance for the program and contracts with about 72 private companies—such as Blue Cross and Aetna—to handle claims screening and processing and to audit providers. Each of these commercial contractors works with its local medical community to set coverage policies and payment controls. As a result, billing problems are handled, for the most part, by contractors, and they are the primary referral parties to law enforcement agencies for suspected fraud.

Medicare’s basic nursing home benefit covers up to 100 days of certain posthospital stays in a skilled nursing facility. Skilled nursing facilities submit bills for which they receive interim payment; final payments are based on costs within a cost-limit cap. This benefit is paid under part A, Hospital Insurance, which also pays for hospital stays and care provided by home health agencies and hospices. Even if Medicare beneficiaries do not meet the conditions for Medicare coverage of a skilled nursing facility stay, they are still eligible for the full range of part B benefits. Although Medicaid or the resident may be paying for the nursing home, Medicare will pay for ancillary services and items such as physical and other types of therapy, prosthetics, and surgical dressings. Part B is a voluntary part of the Medicare program that beneficiaries may elect and for which they pay monthly premiums. Part B also pays for physician care and diagnostic testing.

About 6 million people have both Medicare and Medicaid coverage, and, of these, over 4.8 million represent state “buy-ins” for Medicare coverage. Dually eligible beneficiaries are among the most vulnerable Medicare beneficiaries. They are generally poor, have a greater incidence of serious and chronic conditions, and are much more likely to be institutionalized. As a matter of fact, about 1.4 million reside in institutions, while only 600,000 of the approximately 31 million Medicare beneficiaries without Medicaid coverage are in institutions. Over half of all dually eligible patients over 85 reside in nursing facilities.

When a copayment is required, a Medicare beneficiary, or a representative designated by the beneficiary, receives an “Explanation of Medicare Benefits” (EOMB), which specifies the services billed on behalf of the individual. The EOMB is an important document because beneficiaries and their families can use it to verify that the services were actually performed. The dually eligible population, however, often does not have a representative in the community to receive and review this document. In fact, many nursing home patients actually have the nursing home itself receive the EOMBs on their behalf.

In 1996, Medicare spent $11.3 billion on skilled nursing facility benefits and an undetermined amount on part B ancillary services and items. The providers of these services and items can bill Medicare in a variety of ways. With this variety comes the opportunity to blur the transactions that actually took place and inflate charges for services rendered. Ancillary services and items for Medicare beneficiaries in nursing facilities can be provided by the nursing facility itself, a company wholly or partially owned by the nursing facility, or an independent supplier or practitioner. 
Our work has shown that independent providers and suppliers can bill Medicare directly for services or supplies without the knowledge of the beneficiary or the facility, and that companies that provide therapy are able to inflate their billings. Nursing facilities often do not have the in-house capability to provide all the services and supplies that patients need. Accordingly, outside providers market their services and supplies to nursing facilities to meet the needs of the facilities’ patients. HCFA’s reimbursement system allows these providers to bill Medicare directly without confirmation from the nursing facility or a physician that the care or items were necessary or delivered as claimed. As a result, the program is vulnerable to exploitation. Providers or their representatives gain access to records not because they have any responsibility for the direct care of these patients, but solely to market their services or supplies. From these records, unscrupulous providers can obtain all the information necessary to order, bill, and be reimbursed by Medicare for services and supplies that are in many instances not necessary or even provided. In 1996, we reported the following examples:

A group optometric practice performed routine eye examinations on nursing facility patients, a service not covered by Medicare. The optometrist was always preceded by a sales person who targeted the nursing facility’s director of nursing or its social worker and claimed the group was offering eye examinations at no cost to the facility or the patient. The nursing facility gave the sales person access to patients’ records, and this person then obtained the information necessary to file claims. Nursing staff would obtain physicians’ orders for the “free” examinations, and an optometrist would later arrive to conduct the examinations. The billings to Medicare, however, were for services other than eye examinations—services that were never furnished or were unnecessary.

The owner of a medical supply company approached nursing facility administrators in several states and offered to provide supplies for Medicare patients at no cost to the facility. After reviewing nursing facility records, this company identified Medicare beneficiaries, obtained their Medicare numbers, developed lists of supplies on the basis of diagnoses, identified attending physicians, and made copies of signed physician orders in the files. The supplier then billed Medicare for items it actually delivered but also submitted 4,000 fraudulent claims for items never delivered. As part of the 1994 judgment, the owner forfeited $328,000 and was imprisoned and ordered to make restitution of $971,000 to Medicare and $60,000 to Medicaid.

A supplier obtained a list of Medicare patients and their Medicare numbers from another supplier who had access to this information. The first supplier billed Medicare for large quantities of supplies that were never provided to these patients, and both suppliers shared in the approximately $814,000 in reimbursements.

We found that nursing home staff’s giving providers or their representatives inappropriate access to patient medical records was a major contributing factor in the fraud and abuse cases we reviewed. Many nursing facilities rely on specialized rehabilitation agencies—also termed outpatient therapy agencies—to provide therapy services. These agencies can be multilayered, interconnected organizations—each layer adding costs to the basic therapy charge—that use outside billing services, which can also add to the cost. 
In those situations in which the nursing facility contracts and pays for occupational and speech therapy services for a Medicare-eligible stay, Medicare might pay the nursing facility what it was charged because of the limited amount of review conducted by claims processing contractors. In practice, however, because of the difficulty in determining what reasonable costs are and the limited resources available for auditing provider cost reports, there is little assurance that inflated charges are not actually being billed and paid. Until recently, HCFA had not established salary guidelines, which are needed to define reasonable costs for occupational or speech therapy. Without such benchmarks, it is difficult for Medicare contractors to judge whether therapy providers overstate their costs. Even for physical therapy, for which salary guidelines do exist, the Medicare-established limits do not apply if the therapy company bills Medicare directly. This is why Medicare has been charged $150 for 15 minutes of therapy when surveys show that average statewide salaries for therapists employed by hospitals and nursing facilities range from $12 to $25 per hour. Our analysis of a sample drawn from a survey of five contractors found that over half of the claims they received for occupational and speech therapy from 1988 to 1993 exceeded $172 in charges per service. Assuming this was the charge for 15 minutes of treatment—which industry representatives described as the standard billing unit—the hourly rate charged for these claims would have been more than $688. It should be noted that neither HCFA nor its contractors could accurately tell us what Medicare actually paid the providers in response to these claims. The amount Medicare actually pays is not known until long after the service is rendered and the claim processed. Although aggregate payments are eventually determinable, existing databases do not provide actual payment data for any individual claim.

HCFA pays contractors to process claims and to identify and investigate potentially fraudulent or abusive claims. We have long been critical of the unstable funding that HCFA’s contractors receive to carry out these program integrity activities. We recently reported that funding for Medicare contractor program safeguard activities declined from 74 cents to 48 cents per claim between 1989 and 1996. During that same period, the number of Medicare claims climbed 70 percent to 822 million. Such budgetary constraints have placed HCFA and its contractors in the untenable position of needing to review more claims with fewer resources. While Medicare contractors do employ a number of effective automated controls to prevent some inappropriate payments, such as suspending claims that do not meet certain conditions for payment for further review, our 1996 report on 70 fraud and abuse cases showed that atypical charges or very large reimbursements routinely escaped those controls and typically went unquestioned. The contractors we reviewed had not put any “triggers” in place that would halt payments when cumulative claims exceeded reasonable thresholds. Consequently, Medicare reimbursed providers, who were subsequently found guilty of fraud or billing abuses, large sums of money over a short period without the contractor’s becoming suspicious. The following examples highlight the problem: A supplier submitted claims to a Medicare contractor for surgical dressings furnished to nursing facility patients. 
In the fourth quarter of 1992, the contractor paid the supplier $211,900 for surgical dressing claims. For the same quarter a year later, the contractor paid this same supplier more than $6 million without becoming suspicious, despite the 2,800-percent increase in the amount paid.

A contractor paid claims for a supplier’s body jackets that averaged about $2,300 per quarter for five consecutive quarters and then jumped to $32,000, $95,000, $235,000, and $889,000 over the next four quarters, with no questions asked.

A contractor reimbursed a clinical psychology group practice for individual psychotherapy visits lasting 45 to 50 minutes when the top three billing psychologists in the group were allegedly seeing from 17 to 42 nursing facility patients per day. On many days, the leading biller of this group would have had to work more than 24 uninterrupted hours to provide the services he claimed.

A contractor paid a podiatrist $143,580 for performing surgical procedures on at least 4,400 nursing facility patients during a 6-month period. For these services to be legitimate, the podiatrist would have had to serve at least 34 patients a day, 5 days a week.

The Medicare contractors in these two cases did not become suspicious until they received complaints from family members, beneficiaries, or competing providers. The EOMB was critical in identifying the specific items and services being billed to Medicare. Although EOMBs have in the past only been required when the beneficiary had a deductible or copayment, HIPAA now requires HCFA to provide an explanation of Medicare benefits for each item or service for which payment may be made, without regard to whether a deductible or coinsurance may be imposed. This provision is still of limited value, however, for nursing home residents who designate the nursing home to receive the EOMBs—which is more common for the dually eligible population.

In other cases, contractors initiated their investigations because of their analyses of paid claims (a practice referred to as “postpayment medical review”), which focused on those providers that appeared to be billing more than their peers for specific procedures. One contractor, for instance, reimbursed a laboratory $2.7 million in 1991 and $8.2 million in 1992 for heart monitoring services allegedly provided to nursing facility patients. The contractor was first alerted in January 1993 through its postpayment review efforts when it noted that this laboratory’s claims for monitoring services exceeded the norm for its peers.

In all these cases, we believe the large increases in reimbursements over a short period or the improbable cumulative services claimed for a single day should have alerted the contractors to the possibility that something unusual was happening and prompted an earlier review. People do not usually work 20-hour days, and billings by a provider for a single procedure do not typically jump 13-fold from one quarter to the next or progressively double every quarter.

Initiatives on various fronts are now under way to address fraud and abuse issues we have discussed here today. Several of these initiatives, however, are in their early stages, and it is too soon to assess whether they will, in fact, prevent fraud and abuse in the nursing facilities environment. Last year, we recommended that HCFA establish computerized prepayment controls that would suspend the most aberrant claims. 
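As an illustration of the kind of computerized "trigger" discussed above, the sketch below flags a provider whose quarterly billings for a single procedure jump sharply from one quarter to the next, so that further claims could be held pending review. It is a minimal sketch only; the three-fold threshold and the function and variable names are our own illustrative assumptions, not a description of any actual HCFA or contractor system.

```python
# Minimal quarter-over-quarter billing "trigger" for one provider and procedure.
# The ratio threshold is an illustrative assumption, not an actual Medicare edit.
def flag_billing_spikes(quarterly_payments, ratio_threshold=3.0):
    """Return (prior, current, ratio) for quarters whose payments jump by at
    least ratio_threshold times the prior quarter."""
    flagged = []
    for prior, current in zip(quarterly_payments, quarterly_payments[1:]):
        if prior > 0 and current / prior >= ratio_threshold:
            flagged.append((prior, current, current / prior))
    return flagged

# Body-jacket example from the testimony: about $2,300 per quarter for five
# quarters, then $32,000, $95,000, $235,000, and $889,000.
payments = [2300, 2300, 2300, 2300, 2300, 32000, 95000, 235000, 889000]
for prior, current, ratio in flag_billing_spikes(payments):
    print(f"Hold for review: ${prior:,} -> ${current:,} ({ratio:.0f}x increase)")
```

Even a crude screen of this kind would have flagged the initial 13-fold body-jacket jump described above before the much larger later payments went out.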
HCFA has since strengthened its instructions to its contractors, directing them to implement prepayment screens to prevent payment of billings for egregious amounts or patterns of medically unnecessary services or items. HCFA also authorized its contractors to deny automatically the entire line item for any services that exceed the egregious service limits.

In regard to therapy services, after a lengthy administrative process, HCFA proposed salary guidelines last month for physical, occupational, speech, and respiratory therapists who furnish care to beneficiaries under a contractual arrangement with a skilled nursing facility. The administration estimates these changes will result in savings to Medicare of $1.7 billion between now and the year 2001, and $3.9 billion between now and the year 2006. The proposed rule would revise the current guideline amounts for physical and respiratory therapies and introduce, for the first time, guideline amounts for occupational therapy and speech/language pathology services.

In March 1995, the Secretary of HHS launched Operation Restore Trust (ORT), a 2-year interagency, intergovernmental initiative to combat Medicare and Medicaid fraud and abuse. ORT targeted its resources on three health care areas susceptible to exploitation, including nursing facility care in five states (California, Florida, Illinois, New York, and Texas) with high Medicare and Medicaid enrollment and rapid growth in billed services.

We and the HHS Inspector General have also reported on problems, such as overutilization of supplies, that can arise when suppliers bill separately for services for nursing home residents. A consolidated billing requirement would make it easier to control payments for these services and give nursing facilities the incentive to monitor them. The requirement would also help prevent duplicate billings and billings for services and items not actually provided. In effect, outside suppliers would have to make arrangements with skilled nursing facilities so that the facilities would bill for suppliers’ services and would be financially liable and medically responsible for the care.

HIPAA established the Medicare Integrity Program, which ensures that program safeguard activities are funded separately from other claims processing activities. HIPAA also included provisions on “administrative simplification.” A lack of uniformity in data among the Medicare program, Medicaid state plans, and private health entities often makes it difficult to compare programs, measure the true effect of changes in health care financing, and coordinate payments for dually eligible patients. For example, HIPAA requires, for the first time, that each provider be given a unique provider number to be used in billing all insurers, including Medicare and Medicaid. The new provisions also require the Secretary of HHS to promulgate standards for all electronic health care transactions; the data sets used in those transactions; and unique identifiers for patients, employers, providers, insurers, and plans. These standards will be binding on all health care providers, insurers, plans, and clearinghouses.

The multiple ways that providers and suppliers can bill for services to nursing home patients and the lax oversight of this process contribute to the vulnerability of payments for the health care of this population. As a result, excessive or fraudulent billings may go undetected. 
We are encouraged, however, by the administration’s recent proposal for consolidated billing, which we believe will put more responsibility on nursing home staff to oversee the services and items being billed on behalf of residents. As more details concerning these or other proposals become available, we will be glad to work with the Subcommittee and others to help sort out their potential implications. This concludes my prepared remarks. I will be happy to answer any questions.

For more information on this testimony, please call Leslie G. Aronovitz on (312) 220-7600 or Donald B. Hunter on (617) 565-7464. Lisanne Bradley also contributed to this statement.

Medicare Post-Acute Care: Home Health and Skilled Nursing Facility Cost Growth and Proposals for Prospective Payment (GAO/T-HEHS-97-90, Mar. 4, 1997).
Skilled Nursing Facilities: Approval Process for Certain Services May Result in Higher Medical Costs (GAO/HEHS-97-18, Dec. 20, 1996).
Medicare: Early Resolution of Overcharges for Therapy in Nursing Facilities Is Unlikely (GAO/HEHS-96-145, Aug. 16, 1996).
Fraud and Abuse: Providers Target Medicare Patients in Nursing Facilities (GAO/HEHS-96-18, Jan. 24, 1996).
Fraud and Abuse: Medicare Continues to Be Vulnerable to Exploitation by Unscrupulous Providers (GAO/T-HEHS-96-7, Nov. 2, 1995).
Medicare: Excessive Payments for Medical Supplies Continue Despite Improvements (GAO/HEHS-95-171, Aug. 8, 1995).
Medicare: Reducing Fraud and Abuse Can Save Billions (GAO/T-HEHS-95-157, May 16, 1995).
Medicare: Tighter Rules Needed to Curtail Overcharges for Therapy in Nursing Facilities (GAO/HEHS-95-23, Mar. 30, 1995).
GAO discussed the challenges that exist in combatting fraud and abuse in the nursing facility environment. GAO noted that: (1) while most providers abide by the rules, some unscrupulous providers of supplies and services have used the nursing facility setting as a target of opportunity; (2) this has occurred for several reasons: (a) the complexities of the reimbursement process invite exploitation; and (b) insufficient control over Medicare claims has reduced the likelihood that inappropriate claims will be denied; (3) GAO is encouraged by a number of recent efforts to combat fraud and abuse, the pending implementation of provisions in the Health Insurance Portability and Accountability Act (HIPAA), and a legislative proposal made by the administration; and (4) while these efforts should make a difference in controlling fraud and abuse in nursing homes, it is too early to tell whether these efforts will be sufficient.
DHS serves as the sector-specific agency for 10 of the sectors: information technology; communications; transportation systems; chemical; emergency services; nuclear reactors, material, and waste; postal and shipping; dams; government facilities; and commercial facilities. Other sector-specific agencies are the departments of Agriculture, Defense, Energy, Health and Human Services, the Interior, the Treasury, and the Environmental Protection Agency. (See table 1 for a list of sector-specific agencies and a brief description of each sector). The nine sector-specific plans we reviewed generally met NIPP requirements and DHS’s sector-specific plan guidance; however, the extent to which the plans met this guidance, and therefore their usefulness in enabling DHS to identify gaps and interdependencies across the sectors, varied depending on the maturity of the sector and on how the sector defines its assets, systems, and functions. As required by the NIPP risk management framework (see fig. 1), sector-specific plans are to promote the protection of physical, cyber, and human assets by focusing activities on efforts to (1) set security goals; (2) identify assets, systems, networks, and functions; (3) assess risk based on consequences, vulnerabilities, and threats; (4) establish priorities based on risk assessments; (5) implement protective programs; and (6) measure effectiveness. In addition to these NIPP risk management plan elements outlined above and according to DHS’s sector-specific plan guidance, the plans are also to address the sectors’ efforts to (1) implement a research and development program for critical infrastructure protection and (2) establish a structure for managing and coordinating the responsibilities of the federal departments and agencies—otherwise known as sector-specific agencies—identified in HSPD-7 as responsible for critical-infrastructure protection activities specified for the 17 sectors. Most of the plans included the required elements of the NIPP risk management framework, such as security goals and the methods the sectors expect to use to prioritize infrastructure, as well as to develop and implement protective programs. However, the plans varied in the extent to which they included key information required for each plan element. For example, all of the plans described the threat analyses that the sector conducts, but only one of the plans described any incentives used to encourage voluntary risk assessments, as required by the NIPP. Such incentives are important because a number of the industries in the sectors are privately owned and not regulated, and the government must rely on voluntary compliance with the NIPP. Additionally, although the NIPP called for each sector to identify key protective programs, three of the nine plans did not address this requirement. DHS officials told us that this variance in the plans can, in large part, be attributed to the levels of maturity and cultures of the sectors, with the more mature sectors generally having more comprehensive and complete plans than sectors without similar prior working relationships. For example, the banking and finance and energy sector plans included most of the key information required for each plan element. According to DHS officials, this is a result of these sectors having a history and culture of working with the government to plan and accomplish many of the same activities that are being required for the sector-specific plans. 
Therefore, these sectors were able to create plans that were more comprehensive and developed than those of less mature sectors, such as the public health and health care and agriculture and food sectors. The plans also varied in how comprehensively they addressed their physical, human, and cyber assets, systems, and functions because sectors reported having differing views on the extent to which they were dependent on each of these assets, systems, and functions. According to DHS’s sector-specific plan guidance, a comprehensive identification of such assets is important because it provides the foundation on which to conduct risk analysis and identify the appropriate mix of protective programs and actions that will most effectively reduce the risk to the nation’s infrastructure. Yet, only one of the plans—drinking water and water treatment—specifically included all three categories of assets. For example, because the communications sector limited its definition of assets to networks, systems, and functions, it did not, as required by DHS’s plan guidance, include human assets in its existing security projects and the gaps it needs to fill related to these assets to support the sector’s goals. In addition, the national monuments and icons plan defined the sector as consisting of physical structures with minimal cyber and telecommunications assets because these assets are not sufficiently critical that damaging or destroying them would interfere with the continued operation of the physical assets. In contrast, the energy sector placed a greater emphasis on cyber attributes because it heavily depends on these cyber assets to monitor and control its energy systems. DHS officials also attributed the difference in the extent to which the plans addressed required elements to the manner in which the sectors define their assets and functions. The plans, according to DHS’s Office of Infrastructure Protection officials, are a first step in developing future protective measures. In addition, these officials said that the plans should not be considered to be reports of actual implementation of such measures. Given the disparity in the plans, it is unclear to what extent DHS will be able to use them to identify gaps and interdependencies across the sectors in order to plan future protective measures. It is also unclear, from reviewing the plans, how far along each sector actually is in identifying assets, setting priorities, and protecting key assets. DHS officials said that to make this determination, they will need to review the sectors’ annual progress reports, due this month, that are to provide additional information on plan implementation as well as identify sector priorities.

Representatives of 10 of 32 councils said the plans were valuable because they gave their sectors a common language and framework to bring the disparate members of the sector together to better collaborate as they move forward with protection efforts. For example, the government facilities council representative said that the plan was useful because relationships across the sector were established during its development that have resulted in bringing previously disjointed security efforts together in a coordinated way. 
The banking and finance sector’s coordinating council representative said that the plan was a helpful way of documenting the history, the present state, and the future of the sector in a way that had not been done before and that the plan will be a working document to guide the sector in coordinating efforts. Similarly, an energy sector representative said that the plan provides a common format so that all participants can speak a common language, thus enabling them to better collaborate on the overall security of the sector. The representative also said that the plan brought the issue of interdependencies between the energy sector and other sectors to light and provided a forum for the various sectors to collaborate. DHS’s Office of Infrastructure Protection officials agreed that the main benefit of these plans was that the process of developing them helped the sectors to establish relationships between the private sector and the government and among private sector stakeholders that are key to the success of protection efforts.

However, representatives of 8 of the 32 councils said the plans were not useful to their sectors because (1) the plans did not represent a true partnership between the federal and private sectors or were not meaningful to all the industries represented by the sector or (2) the sector had already taken significant protection actions, and thus developing the plan did not add value. The remaining council representatives did not offer views on this issue. Sector representatives for three transportation modes—rail, maritime, and aviation—reported that their sector’s plan was written by the government and that the private sector did not participate fully in the development of the plan or the review process. As a result, the representatives did not believe that the plan was of value to the transportation sector as a whole because it does not represent the interests of the private sector. Similarly, agriculture and food representatives said writing the plan proved to be difficult because of the sector’s diversity and size—more than 2 million farms, 1 million restaurants, and 150,000 meat processing plants. They said that one of the sector’s biggest challenges was developing a meaningful document that could be used by all of the industries represented. As a result of these challenges, the sector submitted two plans in December 2006 that represented a best effort at the time, but the sector council said it intends to use the remainder of the 2007 calendar year to create a single plan that better represents the sector.

In contrast, the coordinating council representative for the nuclear reactors, materials, and waste sector said that because the sector’s security has been robust for a long time, the plan only casts the security of the sector in a different light, and the drinking water and water treatment systems sector representative said that the plan is a “snapshot in time” document for a sector that already has a 30-year history of protection, and thus the plan did not provide added value for the sector. Officials at DHS’s Office of Infrastructure Protection acknowledged that these sectors have a long history of working together and in some cases have been doing similar planning efforts. However, the officials said that the effort was of value to the government because it now has plans for all 17 sectors and it can begin to use the plans to address the NIPP risk management framework.

Representatives of 11 of 32 councils said the review process associated with the plans was lengthy. 
They commented that they had submitted their plans in advance of the December 31, 2006, deadline, but had to wait 5 months for the plans to be approved. Eight of them also commented that while they were required to respond within several days to comments from DHS on the draft plans, they had to wait considerably longer during the continuing review process for the next iteration of the draft. For example, a representative of the drinking water and water treatment sector said that the time the sector had to incorporate DHS’s comments into a draft of the plan was too short—a few days—and this led the sector to question whether its members were valued partners to DHS. DHS’s Infrastructure Protection officials agreed that the review process had been lengthy and that the comment periods given to sector officials were too short. DHS officials said this occurred because of the volume of work DHS had to undertake and because some of the sector-specific agencies were still learning to operate effectively with the private sector under a partnership model in which the private sector is an equal partner. The officials said that they plan to refine the process as the sector-specific agencies gain more experience working with the private sector.

Conversely, representatives from eight of 32 councils said the review process for the plans worked well, and five of these council representatives were complimentary of the support they received from DHS. The remaining council representatives did not offer views on this topic. For example, an information technology (IT) sector coordinating council representative said that the review and feedback process on their plan worked well and that the Office of Infrastructure Protection has helped tremendously in bringing the plans to fruition. However, sector coordinating council representatives for six sectors also voiced concern that the trusted relationships established between the sectors and DHS might not continue if there were additional turnover in DHS, as has occurred in the past. For example, the representative of one council said they had established productive working relationships with officials in the Offices of Infrastructure Protection and Cyber Security and Communications, but were concerned that these relationships were dependent on the individuals in these positions and that the relationships may not continue without the same individuals in charge at DHS. As we have reported in the past, developing trusted partnerships between the federal government and the private sector is critical to ensure the protection of critical infrastructure.

Nine of 32 sector representatives said that their preexisting relationships with stakeholders helped in establishing and maintaining their sector councils, and two noted that establishing the councils had improved relationships. Such participation is critical to well-functioning councils. For example, representatives from the dams, energy, and banking and finance sectors, among others, said that existing relationships continue to help in maintaining their councils. In addition, the defense industrial base representatives said the organizational infrastructure provided by the sector councils is valuable because it allows for collaboration. Representatives from the national monuments and icons sector said that establishing the government sector council has facilitated communication within the sector. 
We also reported previously that long-standing relationships were a facilitating factor in council formation and that 10 sectors had formed either a government council or sector council that addressed critical infrastructure protection issues prior to DHS’s development of the NIPP. As a result, these 10 sectors were more easily able to establish government coordinating councils and sector coordinating councils under the NIPP model. Several councils also noted that the Critical Infrastructure Partnership Advisory Council (CIPAC), created by DHS in March 2006 to facilitate communication and information sharing between the government and the private sector, has helped facilitate collaboration because it allows the government and industry to interact without being open to public scrutiny under the Federal Advisory Committee Act. This is important because previously, meetings between the private sector and the government had to be open to the public, hampering the private sector’s willingness to share information.

Conversely, seven sector council representatives reported difficulty in achieving and maintaining sector council membership, thus limiting the ability of the councils to effectively represent the sector. For example, the public health and health care sector representative said that getting the numerous sector members to participate is a challenge, and the government representative noted that because of this, the first step in implementing the sector-specific plan is to increase awareness about the effort among sector members to encourage participation. Similarly, due to the size of the commercial facilities sector, participation, while critical, varies among its industries, according to the government council representative. Meanwhile, the banking and finance sector representatives said that the time commitment for private sector members and council leaders makes participation difficult for smaller stakeholders, but getting them involved is critical to an effective partnership. Likewise, the IT sector representatives said engaging some government members in joint council meetings is a continuing challenge because of the members’ competing responsibilities. Without such involvement, the officials said, it is difficult to convince the private sector representatives of the value of spending their time participating on the council.

Additionally, obtaining state and local government participation in government sector councils remains a challenge for five sectors. Achieving such participation is critical because these officials are often the first responders in case of an incident. Several government council representatives said that a lack of funding for representatives from these entities to travel to key meetings has limited state and local government participation. Others stated that determining which officials to include was a challenge because of the sheer volume of state and local stakeholders. DHS Infrastructure Protection officials said that the agency is trying to address this issue by providing funding for state and local participation in quarterly sector council meetings and has created a State, Local, Tribal, and Territorial Government Coordinating Council (SLTTGCC)—composed of state, local, tribal, and territorial homeland security advisers—that serves as a forum for coordination across these jurisdictions on protection guidance, strategies, and programs. 
Eleven of the 32 council representatives reported continuing challenges with sharing information between the federal government and the private sector. For example, six council representatives expressed concerns about the viability of two of DHS’s main information-sharing tools—the Homeland Security Information Network (HSIN) and the Protected Critical Infrastructure Information (PCII) program. We reported in April 2007 that the HSIN system was built without appropriate coordination with other information-sharing initiatives. In addition, in a strategic review of HSIN, DHS reported in April 2007 that it has not clearly defined the purpose and scope of HSIN and that HSIN has been developed without sufficient planning and program management. According to DHS Infrastructure Protection officials, although they encouraged the sectors to use HSIN, the system does not provide the capabilities that were promised, including providing the level of security expected by some sectors. As a result, they said the Office of Infrastructure Protection is exploring an alternative that would better meet the needs of the sectors.

In addition, three council representatives expressed concerns about whether information shared under the PCII program would be protected. Although this program was specifically designed to establish procedures for the receipt, care, and storage of critical infrastructure information submitted voluntarily to the government, the representatives said potential submitters continue to fear that the information could be inadequately protected, used for future legal or regulatory action, or inadvertently released. In April 2006, we reported that DHS faced challenges implementing the program, including being able to assure the private sector that submitted information will be protected and specifying who will be authorized to have access to the information, as well as to demonstrate to the critical infrastructure owners the benefits of sharing the information to encourage program participation. We recommended, among other things, that DHS better (1) define its critical-infrastructure information needs and (2) explain how this information will be used to attract more users. DHS concurred with our recommendations. In September 2006, DHS issued a final rule that established procedures governing the receipt, validation, handling, storage, marking, and use of critical infrastructure information voluntarily submitted to DHS. DHS is in the process of implementing our additional recommendations that it define its critical-infrastructure information needs under the PCII program and better explain how this information will be used to build the private sector’s trust and attract more users.

To date, DHS has issued a national plan aimed at providing a consistent approach to critical infrastructure protection, ensured that all 17 sectors have organized to collaborate on protection efforts, and worked with government and private sector partners to complete all 17 sector-specific plans. Nevertheless, our work has shown that sectors vary in terms of how complete and comprehensive their plans are. Furthermore, DHS recognizes that the sectors, their councils, and their plans must continue to evolve. As they do and as the plans are updated and annual implementation reports are provided that begin to show the level of protection achieved, it will be important that the plans and reports add value, both to the sectors themselves and to the government as a whole. 
This is critical because DHS is dependent on these plans and reports to meet its mandate to evaluate whether gaps exist in the protection of the nation’s most critical infrastructure and key resources and, if gaps exist, to work with the sectors to address the gaps. Likewise, DHS must depend on the private sector to voluntarily put protective measures in place for many assets. It will also be important that sector councils have representative members and that the sector-specific agencies have buy-in from these members on protection plans and implementation steps. One step DHS could take to implement our past recommendations to strengthen the sharing of information is for the PCII program to better define its critical infrastructure information needs and better explain how this information will be used to build the private sector’s trust and attract more users. As we have previously reported, such sharing of information and the building of trusted relationships are crucial to the protection of the nation’s critical infrastructure. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at any time.

For further information on this testimony, please contact Eileen Larence at (202) 512-8777 or by e-mail at larencee@gao.gov. Individuals making key contributions to this testimony include Susan Quinlan, Assistant Director; R. E. Canjar; Landis Lindsey; E. Jerry Seigler; and Edith Sohna.

We assessed the sector-specific plans (SSPs) using 8 criteria, consisting of 40 key information requirements. We extracted these requirements from the NIPP and from the detailed sector-specific plan guidance issued by DHS. Each criterion reflects a component DHS required for the completion of the SSP. The 8 criteria we used are listed below along with the corresponding 40 key information requirements.

Section 1: Sector Profile and Goals
1. Did the sector include physical and human assets as part of its sector profile?
2. Does the SSP identify any regulations or key authorities relevant to the sector that affect physical and human assets and protection?
3. Does the SSP show the relationships between the sector-specific agency (SSA) and the private sector, other federal departments and agencies, and state and local agencies that are either owner/operators of assets or provide a supporting role to securing key resources?
4. Does the SSP contain sector-specific goals?
5. Does the SSP communicate the value of the plan to the private sector, other owners, and operators?

Section 2: Identify Assets, Systems, Networks, and Functions
6. Does the SSP include a process for identifying the sector’s assets and functions, both now and in the future?
7. Does the SSP include a process to identify physical and human asset dependencies and interdependencies?
8. Does the SSP describe the criteria being used to determine which assets, systems, and networks are and are not of potential concern?
9. Does the SSP describe how the infrastructure information being collected will be verified for accuracy and completeness?

Section 3: Assess Risks
10. Does the SSP discuss the risk assessment process, including whether assessments in the sector are mandated by regulation or are primarily voluntary in nature?
11. Does the SSP address whether a screening process (a process to determine whether a full assessment is required) for assets would be beneficial for the sector, and if so, does it discuss the methodologies or tools that would be used to do so?
12. Does the SSP identify how potential consequences of incidents, including worst case scenarios, would be assessed?
13. Does the SSP describe the relevant processes and methodologies used to perform vulnerability assessments?
14. Does the SSP describe any threat analyses that the sector conducts?
15. Does the SSP describe any incentives used to encourage voluntary performance of risk assessments?

Section 4: Prioritize Infrastructure
16. Does the SSP identify the party responsible for conducting a risk-based prioritization of the assets?
17. Does the SSP describe the process, current criteria, and frequency for prioritizing sector assets?
18. Does the SSP provide a common methodology for comparing both physical and human assets when prioritizing a sector’s infrastructure?

Section 5: Develop and Implement Protective Programs
19. Does the SSP describe the process that the SSA will use to work with asset owners to develop effective long-term protective plans for the sector’s assets?
20. Does the SSP identify key protective programs (and their role) in the sector’s overall risk management approach?
21. Does the SSP describe the process used to identify and validate specific program needs?
22. Does the SSP include the minimum requirements necessary for the sector to prevent, protect against, respond to, and recover from an attack?
23. Does the SSP address implementation and maintenance of protective programs for assets once they are prioritized?
24. Does the SSP address how the performance of protective programs is monitored by the sector-specific agencies and security partners to determine their effectiveness?

Section 6: Measure Progress
25. Does the SSP explain how the SSA will collect, verify, and report the information necessary to measure progress in critical infrastructure/key resources protection?
26. Does the SSP describe how the SSA will report the results of its performance assessments to the Secretary of Homeland Security?
27. Does the SSP call for the development and use of metrics that will allow the SSA to measure the results of activities related to assets?
28. Does the SSP describe how performance metrics will be used to guide future decisions on projects?
29. Does the SSP list relevant sector-level implementation actions that the SSA and its security partners deem appropriate?

Section 7: Research and Development for Critical Infrastructure/Key Resources Protection
30. Does the SSP describe how technology development is related to the sector’s goals?
31. Does the SSP identify those sector capability requirements that can be supported by technology development?
32. Does the SSP describe the process used to identify physical and human sector-related research requirements?
33. Does the SSP identify existing security projects and the gaps it needs to fill to support the sector’s goals?
34. Does the SSP identify which sector governance structures will be responsible for R&D?
35. Does the SSP describe the criteria that are used to select new and existing initiatives?

Section 8: Manage and Coordinate SSA Responsibilities
36. Does the SSP describe how the SSA intends to staff and manage its NIPP responsibilities (e.g., through the creation of a program management office)?
37. Does the SSP describe the processes and responsibilities of updating, reporting, budgeting, and training?
38. Does the SSP describe the sector’s coordinating mechanisms and structures?
39. Does the SSP describe the process for developing the sector-specific investment priorities and requirements for critical infrastructure/key resource protection?
40. Does the SSP describe the process for information sharing and protection?

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As Hurricane Katrina so forcefully demonstrated, the nation's critical infrastructures--both physical and cyber--have been vulnerable to a wide variety of threats. Because about 85 percent of the nation's critical infrastructure is privately owned, it is vital that public and private stakeholders work together to protect these assets. The Department of Homeland Security (DHS) is responsible for coordinating a national protection strategy and has promoted the formation of government and private councils for the 17 infrastructure sectors as a collaborating tool. The councils, among other things, are to identify their most critical assets, assess the risks they face, and identify protective measures in sector-specific plans that comply with DHS's National Infrastructure Protection Plan (NIPP). This testimony is based primarily on GAO's July 2007 report on the sector-specific plans and the sector councils. Specifically, it addresses (1) the extent to which the sector-specific plans meet requirements, (2) the council members' views on the value of the plans and DHS's review process, and (3) the key success factors and challenges that the representatives encountered in establishing and maintaining their councils. In conducting the previous work, GAO reviewed 9 of the 17 draft plans and conducted interviews with government and private sector representatives of the 32 councils, 17 government and 15 private sector. Although the nine sector-specific plans GAO reviewed generally met NIPP requirements and DHS's sector-specific plan guidance, eight did not describe any incentives the sector would use to encourage owners to conduct voluntary risk assessments, as required by the NIPP. Most of the plans included the required elements of the NIPP risk management framework. However, the plans varied in how comprehensively they addressed not only their physical assets, systems, and functions, but also their human and cyber assets, systems, and functions, a requirement in the NIPP, because the sectors had differing views on the extent to which they were dependent on each of these assets. A comprehensive identification of all three categories of assets is important, according to DHS plan guidance, because it provides the foundation on which to conduct risk analyses and identify appropriate protective actions. Given the disparity in the plans, it is unclear to what extent DHS will be able to use them to identify security gaps and critical interdependencies across the sectors. DHS officials said that to determine this, they will need to review the sectors' annual reports. Representatives of the government and sector coordinating councils had differing views regarding the value of sector-specific plans and DHS's review of those plans. While 10 of the 32 council representatives GAO interviewed reported that they saw the plans as being useful for their sectors, representatives of eight councils disagreed because they believed the plans either did not represent a partnership among the necessary key stakeholders, especially the private sector, or were not valuable because the sector had already progressed beyond the plan. In addition, representatives of 11 of the 32 councils felt the review process was too lengthy, but 8 thought the review process worked well. The remaining council representatives did not offer views on these issues. As GAO reported previously, representatives continued to report that their sector councils had preexisting relationships that helped them establish and maintain their sector councils. 
However, seven of the 32 representatives reported continuing difficulty achieving and maintaining sector council membership, thus limiting the ability of the councils to effectively represent the sector. Eleven council representatives reported continuing difficulties sharing information between the public and private sectors, and six council representatives expressed concerns about the viability of the information system DHS intends to rely on to share information about critical infrastructure issues with the sectors or the effectiveness of the Protected Critical Infrastructure Information program--a program that established procedures for the receipt, care, and storage of information submitted to DHS. GAO has outstanding recommendations addressing this issue, with which DHS generally agreed and which it is in the process of implementing.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued after transmittal of the president’s budget, provide a direct linkage between an agency’s longer term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ reported performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. GSA’s overall mission is to provide policy leadership and expert solutions in services, space, and products at the best value to enable federal employees to accomplish their work-related responsibilities. As part of this mission, GSA recognizes that it must provide federal agencies with the highest quality service at a competitive cost. In its September 2000 strategic plan, GSA discussed the major goals related to its mission, which are to promote responsible asset management, compete effectively for the federal market, excel at customer service, meet federal social and environmental objectives, and anticipate future workforce needs. For the three key selected outcomes—quality products and services are provided to federal agencies at competitive prices and significant price savings to the government; federal buildings are safe, accessible, and energy efficient; and federal buildings are adequately maintained—GSA’s fiscal year 2000 performance report indicated that GSA met or exceeded 21 of the 34 performance goals related to the 3 outcomes. For the remaining 13 goals, GSA did not meet 11 goals and was unable to measure 2 goals. In its report, GSA (1) typically described various strategies it planned to implement for achieving the unmet goals and (2) generally discussed the effects of fiscal year 2000 performance on estimated fiscal year 2001 performance for many goals. For such goals, the report discussed fiscal year 2000 performance and what performance could be expected in fiscal year 2001. In addition, the fiscal year 2002 performance plan included discussions of strategies for each of the goals that supported the three outcomes. As in fiscal year 1999, GSA’s performance report showed that it had achieved mixed results for this outcome in fiscal year 2000. GSA’s 31 performance goals for this outcome were typically outcome-oriented, measurable, and quantifiable. The goals addressed a wide range of issues involving products and services in such areas as supply and procurement, real property operations, vehicle acquisition and leasing, travel and transportation, information technology (IT), and telecommunications. 
GSA reported that it exceeded or met 19 of the 31 goals in fiscal year 2000 in such areas as leasing operations, real property disposal and operations, supply and procurement, vehicle acquisition and leasing, travel and transportation, personal property management, and network services. For the remaining 12 goals, GSA did not meet 10 goals and was unable to measure its performance on 2 goals. GSA cited reasons for not meeting or measuring the goals or explained that it was analyzing data to determine the reasons. GSA also discussed to some extent various approaches, including plans, actions, and time frames, to achieve most of the unmet goals. The unmet goals were in such areas as leasing and real property operations, supply and procurement, and vehicle acquisition and leasing; the unmeasured goals were in the vehicle acquisition and leasing and travel and transportation areas. As it did in the fiscal year 1999 report, GSA revised many goals and measures for this key outcome in the fiscal year 2000 performance report. The revisions ranged from updating target performance levels to broadening the scope of various goals to include services as well as products. In addition, in its fiscal year 2000 report, GSA described the effects of the fiscal year 2000 performance on the estimated fiscal year 2001 performance for 15 of the 31 goals related to this outcome. GSA’s fiscal year 2002 performance plan also had 31 goals related to this outcome. The plan had strategies for all the goals, which covered a wide range of activities that clearly described major steps to reach the goals. For example, to help achieve the goal of maximizing cost avoidance through reutilization and donation of excess federal personal property, GSA’s strategies included making the property visible through the Federal Disposal System, which is an information system that identifies available surplus property. Also, to achieve the goal of increasing the number of products and services available to federal customers on the Internet, GSA’s strategies included a requirement that starting in October 2001, all new schedule contractors had 6 months to include their products and services on GSA Advantage!™, the on-line service for obtaining products and services. For the goals related to this outcome, GSA discussed data validation and verification efforts in both the fiscal year 2000 report and the fiscal year 2002 plan. For the second key outcome, GSA’s fiscal year 2000 performance report, like the fiscal year 1999 report, had one goal related to building security. Specifically, the goal was to reduce the number of buildings that have costs in the high range of the benchmark set by private sector experts while maintaining effective security in government buildings. In addition to this goal, GSA discussed the issue of building security in a separate section of the performance report. The section explained that GSA is changing its approach from a reactive posture of patrol and incident response to a proactive stance of crime prevention and threat reduction. The section also said that GSA seeks to identify and reduce risk through automated risk assessment surveys and a comprehensive nationwide risk threat assessment. For the security goal, GSA had initially established a measure that would compare the agency’s protection costs with similar costs in the private sector. However, GSA’s fiscal year 2000 performance report recognized, as did its fiscal year 1999 report, that security could not be measured by costs alone. 
Thus, GSA did not use its initial cost-related measure but relied on customer satisfaction as an interim measure of the quality of protection services at government buildings while it developed a new measure. As it did in fiscal year 1999, GSA reported that it exceeded its fiscal year 2000 customer satisfaction target. The fiscal year 2000 report explained that GSA was developing a national security measure that is intended to assess the overall risk of threats to government buildings more comprehensively. The new threat assessment measure is being developed to consider the motives, opportunities, and means that outside groups or individuals may possess to threaten the security of government buildings. GSA also will include customer satisfaction in developing the measure. GSA’s fiscal year 2000 report said that this information is quantifiable and can be used to calculate risk scores for specific buildings. Building scores can be combined to establish a national threat assessment index, which can be used over time to help measure GSA’s efforts to reduce the level of threat or risk to government buildings. GSA anticipated implementing the new measure in fiscal year 2001. GSA’s fiscal year 2002 performance plan includes a new security goal related to its overall efforts to reduce threats to buildings. As part of this goal, GSA developed a regional threat composite index, which was designed to help identify and quantify the level of risk or threat to federal buildings located in specific geographical areas and assess GSA’s performance in reducing such threats. GSA expects that by fiscal year 2002, the regional indexes will be used to establish a national threat assessment index baseline. Strategies related to this goal clearly described major steps to reach the goal and included such efforts as obtaining timely criminal intelligence information, reducing the number of violent incidents, and partnering with security contractors. By developing and implementing the new security goal and its related measure, GSA has taken steps to address the recommendation in our June 2000 GPRA report. This recommendation called for GSA to develop security goals and measures that are more programmatic, that hold agency officials more accountable for results, and that allow GSA to determine if security strategies are working as intended. In addition, the plan continues to have a customer satisfaction goal, which includes such strategies as (1) using focus groups at buildings to help GSA better understand what is needed to improve customer satisfaction with security; and (2) sharing practices that have enhanced customer satisfaction scores among building managers, law enforcement security officers, and other building personnel nationwide. GSA’s fiscal year 2002 performance plan also included a goal related to the conservation of energy consumption in federal buildings. Executive Order 13123, dated June 3, 1999, stated that energy consumption is to be reduced by 35 percent by fiscal year 2010 compared with the 1985 baseline. In the fiscal year 2002 plan, GSA identified various energy conservation strategies, such as pursuing methods that would help GSA facilities to be recognized by DOE and EPA for achievements in effective environmental design and construction and using utility management techniques to enhance building operations’ efficiency. For the goals related to this outcome, GSA discussed data validation and verification efforts in both the fiscal year 2000 report and the fiscal year 2002 plan. 
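GSA’s fiscal year 2000 report describes building-level risk scores that can be combined into regional composites and a national threat assessment index, but it does not publish the scoring formula. The following minimal Python sketch is therefore only an illustration of that kind of roll-up under assumed inputs; the factor ratings, the equal-weight averaging, and the region names are hypothetical and do not represent GSA’s actual method.

    # Hypothetical illustration: factor ratings, weights, and regions are assumed,
    # not taken from GSA's report, which does not publish its scoring formula.
    from statistics import mean

    # Each building is rated on the factors the report mentions -- motive,
    # opportunity, and means -- here on an assumed 0 (low) to 10 (high) scale.
    buildings = {
        "Region A": [
            {"motive": 3, "opportunity": 5, "means": 2},
            {"motive": 7, "opportunity": 6, "means": 4},
        ],
        "Region B": [
            {"motive": 2, "opportunity": 3, "means": 1},
            {"motive": 5, "opportunity": 4, "means": 3},
            {"motive": 6, "opportunity": 7, "means": 5},
        ],
    }

    def building_risk_score(factors):
        # Combine the three factor ratings into one building score (simple average).
        return mean([factors["motive"], factors["opportunity"], factors["means"]])

    # Regional composite index: average of the building scores in each region.
    regional_index = {
        region: mean(building_risk_score(b) for b in blds)
        for region, blds in buildings.items()
    }

    # National threat assessment index baseline: average of the regional indexes.
    national_index = mean(regional_index.values())

    print(regional_index)   # approximately {'Region A': 4.5, 'Region B': 4.0}
    print(national_index)   # approximately 4.25; tracked over time to gauge whether risk is falling

Whatever weighting GSA actually adopts, the roll-up structure the plan describes is the same: building scores feed regional composites, and the regional composites feed a national baseline that can be tracked over time.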
Neither the report nor the plan included any performance goals directly related to federal building accessibility. For the third key outcome, GSA’s fiscal year 2000 performance report, like the fiscal year 1999 report, included two goals under this outcome, which showed mixed performance results. The goals, which were related to the timeliness of and cost controls over repairs and alterations to GSA buildings, were objective, measurable, and quantifiable. The measures generally indicated progress toward meeting the goals. GSA reported that for fiscal year 2000, its performance exceeded the cost control goal but did not meet the timeliness goal. For the unmet goal, GSA discussed reasons why the goal was not met and described actions it has taken to facilitate meeting the goal in the future. Although GSA did not specifically discuss the effects of fiscal year 2000 performance on estimated fiscal year 2001 performance for the two goals, it did say that it is planning to develop more comprehensive measures for each goal. We recently issued two reports that discussed some aspects of GSA’s efforts to maintain its buildings. Specifically, in March 2000 and April 2001, we reported, among other things, that GSA’s buildings needed billions of dollars for unfunded repairs and alterations; funding limitations were a major obstacle to reducing these needs; and serious consequences, including health and safety concerns, resulted from delaying or not performing repairs and alterations at some buildings. In its fiscal year 2002 performance plan, GSA included three goals related to this outcome. Two of these goals were similar to the goals in the fiscal year 2000 performance report, which involved improving the timeliness of building repairs and alterations and reducing cost escalations for repairs and alterations. In its fiscal year 2002 plan, GSA identified various strategies that clearly described major steps to be taken to achieve the two goals. For the goal related to improving the timeliness of repairs and alterations, GSA identified such strategies as implementing a Web-based program to streamline its building evaluation reports and optimizing the inventory tracking system to better monitor the backlog of work items. For the goal related to reducing cost escalations, GSA identified such strategies as (1) limiting project changes by obtaining up-front commitments from client agencies on the scope, schedules, and costs associated with building repairs and alterations; and (2) using design options that allow for adjusting repair and alteration projects to meet unforeseen events, such as budget reductions or higher-than-anticipated contractor bids. GSA’s fiscal year 2002 plan also had a third goal related to this outcome that involved estimating the government’s financial liabilities for environmental clean-up costs in its properties, such as owned and leased buildings. GSA stated that federal agencies are required to identify, document, and quantify the environmental financial liabilities related to all owned and leased properties within their inventories. In the fiscal year 2002 plan, GSA described its overall strategy for achieving this new goal. GSA explained its strategy as a multiphased approach; the first step of this approach will be to conduct “due care” assessments that will identify the federal properties that pose environmental hazards. GSA expects these assessments to be completed by 2002. 
For properties with documented environmental contamination, subsequent phases of the approach will involve identifying the nature and extent of such contamination. Using this information, GSA’s overall strategy is to establish environmental financial liability baselines that will help the agency set targets for reducing such liabilities in future years. For the goals related to this outcome, GSA discussed data validation and verification efforts in both the fiscal year 2000 report and the fiscal year 2002 plan. Generally, GSA’s fiscal year 2000 performance report and fiscal year 2002 performance plan differed in some significant ways from GSA’s fiscal year 1999 performance report and fiscal year 2001 performance plan, making the current documents more descriptive and informative. In addition to a more explicit discussion of approaches for achieving unmet goals and the effects of fiscal year 2000 performance on estimated fiscal year 2001 performance, the fiscal year 2000 report included expanded discussions of (1) the data sources that GSA relied on to measure performance for specific goals; and (2) the management challenges identified by GSA’s IG, which included two issues we identified as governmentwide high-risk areas—strategic human capital management and information security. Also, a recent study prepared by university researchers noted some overall improvement of GSA’s fiscal year 2000 performance report compared with its fiscal year 1999 report. Although GSA’s fiscal year 2002 performance plan was similar in some respects to the fiscal year 2001 plan, the fiscal year 2002 plan was a more informative document, primarily because it included more detailed discussions of GSA’s data validation and verification efforts and the management challenges identified by GSA’s IG. Also, the fiscal year 2002 plan contained new information that enhanced the plan, including discussions of (1) a new strategic goal related to meeting federal social and environmental objectives that was included in GSA’s September 30, 2000, strategic plan; (2) governmentwide reforms established by OMB; and (3) performance goals for three GSA staff offices that were not included in the fiscal year 2001 plan. The fiscal year 2000 performance report made strides toward addressing the recommendation in our June 2000 GPRA report that identified the need for better implementation of GPRA guidance. In contrast with its fiscal year 1999 performance report, GSA’s fiscal year 2000 report discussed, for each unmet goal, either the reasons why the goal was not achieved or the fact that GSA was studying the matter. In addition, the report typically discussed the various approaches needed for achieving the goals in the future. Also, unlike the fiscal year 1999 report, the fiscal year 2000 report described the impact of fiscal year 2000 performance on estimated fiscal year 2001 performance for many of the goals related to the three outcomes. The fiscal year 2000 performance report also included an enhanced discussion of data sources and the quality of data that GSA used to measure performance. Unlike the fiscal year 1999 performance report, the fiscal year 2000 report included an expanded discussion of the data sources used by its four major organizational components—the Public Buildings Service (PBS), Federal Supply Service (FSS), Federal Technology Service (FTS), and Office of Governmentwide Policy (OGP). 
For example, PBS identified a number of systems from which it obtained performance data, such as the System for Tracking and Administering Real Property, which is its primary source of real property data. In some cases, these discussions went a step beyond identifying systems and gave some information on data validity and verification. For example, PBS mentioned that its National Electronic and Accounting System is independently audited and has received an unqualified opinion for 13 consecutive years and that its customer satisfaction measures from the Gallup Organization, a management consulting firm, are reported with a 95 percent statistical confidence level. In addition, FTS stated that it has purchased a system for collecting and evaluating performance measurement data and plans to implement the system in 2001. GSA stated in the report that it considers its performance data to be generally complete and reliable. However, GSA recognized that data improvements may be needed and said it is currently reviewing its data collection procedures. GSA’s efforts in this area are well founded because GSA’s IG recently reported that GSA has not implemented a system of internal controls to ensure that appropriate levels of management understand and are performing the necessary reviews of performance data to enable them to make assertions about the completeness and existence of the data and systems supporting the measures. Unlike in the fiscal year 1999 performance report, GSA discussed the GSA IG’s management challenges in the fiscal year 2000 report. The six challenges were (1) management controls, (2) information technology solutions, (3) procurement activities, (4) human capital, (5) aging federal buildings, and (6) protection of federal facilities and personnel. The fiscal year 2000 report highlighted major issues related to the challenges and discussed GSA’s approaches to address them. Also, we noted that two of the six challenges addressed issues related to two governmentwide high-risk areas—strategic human capital management and information security—that were in our January 2001 high-risk update. The fiscal year 2000 report explained that GSA intended to address the management challenges more fully in its fiscal year 2002 performance plan, which is discussed later in this report. In May 2001, a study by university researchers cited overall improvement in GSA’s fiscal year 2000 performance report compared with its fiscal year 1999 report. The study, which was prepared by researchers who worked under the Mercatus Center’s Government Accountability Project at George Mason University, compared fiscal years 1999 and 2000 GPRA performance reports for 24 federal agencies primarily in the three areas of transparency, public benefits, and leadership. On the basis of numerical scores that the researchers assigned to the three areas, GSA’s fiscal year 2000 performance report showed improvement in all three areas over its fiscal year 1999 report. The improvements, which we also recognized, were related to such matters as (1) data sources, (2) explanations of why GSA failed to meet various performance goals, and (3) management challenges. In some respects, GSA’s fiscal year 2002 performance plan was similar to the fiscal year 2001 plan. Both plans discussed such matters as (1) GSA’s overall mission, strategic plan, and related strategic goals; and (2) performance goals with related measures and strategies to achieve the goals, links to GSA’s budget, and data validation and verification efforts. 
Also, both performance plans provided highlights of the extent to which GSA’s four major organizational components—PBS, FSS, FTS, and OGP—contributed to the accomplishment of GSA’s overall mission. In addition, we noted that both the fiscal year 2001 and fiscal year 2002 plans included information about cross-cutting issues, which are issues in which GSA’s organizational components work collaboratively with each other and with other federal agencies outside GSA. For example, FSS and PBS collaborate in meeting customers’ real and personal property needs in dealing with relocations or setting up new office facilities. Another example involved FSS’ work with DOE and EPA to make it easier for agencies to comply with the requirements of environmentally related Executive Orders. GSA’s fiscal year 2001 and 2002 plans discussed evaluations and studies of agency programs. For example, FSS included in both plans information on various ongoing and completed program evaluations and major studies, which are generally intended to help FSS determine how it can best accomplish its overall mission of providing supplies and services to federal agencies. These evaluations and studies covered a wide range of topics, such as providing efficient and effective supply chains that can best meet customers’ needs; maintaining appropriate controls over various purchases associated with GSA vehicles, such as fuel; and monitoring the quality of contractor-performed audits of transportation bills. We also identified some differences between the two plans that enhanced the fiscal year 2002 plan and made it a more descriptive and informative document compared with the fiscal year 2001 plan. Most notably, these differences involved expanded and more explicit discussions of data validation and verification and management challenges. We also noted that the fiscal year 2002 plan contained some new information that enhanced the plan, including discussions of a new strategic goal related to meeting federal social and environmental objectives that was included in GSA’s September 30, 2000, strategic plan; efforts to implement governmentwide reforms established by OMB; and performance goals for the three GSA staff offices of CFO, CIO, and CPO that were not included in the fiscal year 2001 plan. The fiscal year 2002 plan included an expanded discussion of GSA’s data validation and verification activities. In fact, GSA added an agencywide data validation and verification section to the plan that discusses, among other things, general controls and procedures used to validate and verify data. In discussing this issue, GSA described the types of performance data used, procedures for collecting such data, controls to help verify and validate each type of data used, and efforts to increase confidence in the data. For example, GSA explained that it has undertaken an extensive effort to review, certify, and clean up data in its larger computer systems, such as PBS’ System for Tracking and Administering Real Property, to help ensure that the systems operate as intended. In addition, GSA stated that it helps maintain data quality through ongoing staff training. Also, GSA stated that for its manual or smaller computer systems, it stresses data confirmation, which involves having more than one person responsible for the data. GSA’s fiscal year 2002 plan also included a more explicit discussion of its efforts to address the six management challenges that GSA’s IG identified. 
In discussing the challenges, GSA generally recognized the importance of continued attention to the challenges and described its overall efforts to address them. For example, in discussing the human capital challenge, GSA described various programs, such as a succession plan for PBS leadership designed to help ensure that GSA can continue to meet its future responsibilities despite impending employee turnover due to retirements. Also, in discussing the challenge of dealing with aging federal buildings, GSA explained that its first capital priority is to fund repairs and alterations for its buildings and said it is currently studying ways to better determine the appropriate level of funding for the repair and alteration program. In addition, the fiscal year 2002 plan included more performance goals that appeared to be related to the management challenges, including the issues of strategic human capital management and information security, which we identified as governmentwide high-risk areas. Also, the plan included a new goal that involved federal building security, which appears to respond to the recommendation in our June 2000 GPRA report that GSA develop security goals and measures. In addition, we noted that in GSA’s fiscal year 2002 performance plan, new information was included that enhanced the plan. For instance, the plan discusses a new strategic goal related to meeting federal social and environmental objectives, which was included in GSA’s September 30, 2000, strategic plan. Overall, this goal is aimed at fulfilling the intent of socioeconomic laws and executive orders and helping GSA’s customers to do so as well. As part of this strategic goal, GSA stated that it takes steps to safeguard the environment and conserve energy, help the disabled and disadvantaged to become more productive, consider the environment in its business decisions, and use natural resources in a sustainable manner. In the fiscal year 2002 plan, GSA established some performance goals that are related to this strategic goal, which involved, among other things, providing opportunities for small businesses and minority- and women- owned businesses to obtain GSA contracts. Also, the fiscal year 2002 performance plan discusses GSA’s ongoing and planned efforts to implement five governmentwide reforms established by OMB. In a February 14, 2001, memorandum to the heads and acting heads of federal departments and agencies, OMB explained that in order to help achieve the President’s vision of improving government functions and achieving operational efficiencies, agencies should include in their fiscal year 2002 plans some performance goals related to the five reforms that would significantly enhance agencies’ administration and operation. These reforms are delayering management levels to streamline organizations, reducing erroneous payments to beneficiaries and other recipients of government funds, making greater use of performance-based contracts, expanding the application of on-line procurement and other e-government services and information, and expanding OMB Circular A-76 competitions and more accurate inventories as required by the Federal Activities Inventory Reform (FAIR) Act. GSA identified various performance goals that focused on implementing some of the governmentwide reforms. 
For example, for the reform that deals with expanding the application of on-line procurement and other e-government services and information, GSA stated that it established Federal Business Opportunities, also known as FedBizOpps, to provide government buyers with convenient, universal access for posting and obtaining information about acquisitions on the Internet. GSA said that the establishment of FedBizOpps is discussed under its performance goal for providing a “single point of entry” to vendors that wish to do business with the federal government. In some instances, GSA did not identify performance goals that addressed the reforms, but it provided reasons for not doing so. For example, for the reform concerning the reduction of erroneous payments, GSA explained that it has not yet established performance goals related to this reform but plans to establish such goals in next year’s performance plan. Also, GSA’s fiscal year 2002 plan included performance goals for three staff offices that were not in the fiscal year 2001 plan. Responsibility for these goals falls within the jurisdiction of three staff offices that report directly to GSA’s Administrator; these are the offices of CFO, CIO, and CPO. The plan had 10 goals for these offices that covered (1) financial matters that CFO oversees, such as electronic collections and payments of invoices; (2) information technology matters that CIO oversees, such as costs and schedules associated with information technology capital investment projects; and (3) human capital matters that CPO oversees, such as the use of on-line university training courses to help improve employee skills. It should be noted that 5 of the 10 goals appeared to be related to the two areas of strategic human capital management and information security, which we identified as governmentwide high-risk areas. The following section provides more information on GSA’s efforts to address the two high-risk areas. GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. Regarding the first area, we noted that GSA’s fiscal year 2000 performance report discussed actions it has taken or plans to take to address strategic human capital management issues, which primarily involved training and developmental opportunities for employees. Also, we noted that GSA’s fiscal year 2002 plan had goals and measures related to strategic human capital management matters, which involved such activities as training and developing employees and improving the cycle time for recruiting. Regarding information security, we noted that GSA’s fiscal year 2000 performance report did not identify actions to address information security issues. However, our analysis showed that GSA’s fiscal year 2002 plan had a goal and measure related to information security, which involved GSA’s efforts to resolve in a timely manner all high-risk vulnerabilities and conditions detected by audits and reviews. The plan also states that FTS has an Office of Information Security, which provides federal agencies with services that are designed to develop a secure government information infrastructure. A more detailed discussion of GSA’s efforts to address the two high-risk areas identified by GAO, along with the GSA IG’s management challenges, can be found in appendix I. Our analysis indicates that both the fiscal year 2000 performance report and fiscal year 2002 performance plan were more informative and useful documents than GSA’s prior year report and plan. 
As we recommended in our June 2000 GPRA report, GSA’s fiscal year 2000 report and fiscal year 2002 plan responded more fully to GPRA implementing guidance and made a concerted effort to address the issue of building security. We recognize that tracking and reporting on intended performance results is an iterative process and that GSA needs to continually review and adjust its plans and reports to be responsive to an ever-changing environment. Given the complexities associated with preparing GPRA plans and reports, it is our view that GSA is making overall progress in responding to the annual GPRA planning and reporting requirements. Therefore, we are not making additional recommendations at this time. As agreed, our evaluation was generally based on the requirements of GPRA; the Reports Consolidation Act of 2000; guidance to agencies from OMB for developing performance plans and reports, including OMB Circular A-11, Part 2; previous reports and evaluations by us and others; our knowledge of GSA’s operations and programs; our identification of best practices concerning performance planning and reporting; and our observations on GSA’s other GPRA-related efforts. We also discussed our review with officials in GSA’s Office of the Chief Financial Officer and Office of the Inspector General. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Committee on Governmental Affairs as important mission areas for the agency and generally reflect the outcomes for GSA’s key programs and activities. We examined and reviewed all performance goals in GSA’s fiscal year 2000 report and focused on those goals that were directly related to the three key outcomes. Also, we reviewed the fiscal year 2000 report and fiscal year 2002 plan and compared them with the agency’s prior year performance report and plan for these outcomes. In addition, we reviewed the fiscal year 2000 report and fiscal year 2002 plan for information related to the major management challenges confronting GSA that were identified by GSA’s Office of the Inspector General in November 2000. These challenges included the issues of strategic human capital management and information security, which GAO identified as governmentwide high-risk areas in our January 2001 performance and accountability series and high-risk update. We did not independently verify the information contained in GSA’s fiscal year 2000 performance report and fiscal year 2002 performance plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of GSA’s performance data. We conducted our review from April through June 2001 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from GSA’s Administrator. On July 25, 2001, GSA officials in the Office of the Chief Financial Officer provided us oral comments on a draft of this report. Specifically, GSA’s Deputy Budget Director and the Managing Director for Planning told us that they agreed with the contents of the report. Also, the officials told us that the name of FTS’ Office of Information Security has been changed to the Office of Information Assurance and Critical Infrastructure Protection. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. 
At that time, we will send copies to appropriate congressional committees; the Administrator, GSA; and the Director, OMB. Copies will also be made available to others upon request. If you or your staff have any questions, please call me at (202) 512-8387 or notify me at ungarb@gao.gov. Key contributors to this report were William Dowdal, Anne Hilleary, David Sausville, and Gerald Stankosky. The following table identifies the six major management challenges confronting the General Services Administration (GSA), which were identified by GSA’s Inspector General (IG). Two of the six challenges also addressed two issues—strategic human capital management and information security—that GAO identified as governmentwide high-risk areas. The first column lists the challenges identified by GSA’s IG and highlights the two agency challenges—human capital and information technology solutions—that addressed issues related to our two governmentwide high-risk areas. The second column discusses GSA’s progress in resolving its challenges, which was discussed in the agency’s fiscal year 2000 performance report. The third column discusses the extent to which GSA’s fiscal year 2002 performance plan includes performance goals and measures to address the two high-risk areas that GAO identified and the management challenges that GSA’s IG identified. In reviewing GSA’s fiscal year 2000 performance report and fiscal year 2002 performance plan, we found that both documents included expanded discussions of the GSA IG’s challenges, which represented a general improvement over the fiscal year 1999 report and fiscal year 2001 plan. In the fiscal year 2000 report and the fiscal year 2002 plan, GSA recognized the importance of continued attention to the challenges and described overall efforts to address them. Furthermore, GSA’s fiscal year 2000 report and fiscal year 2002 plan included various goals that appeared to be related to most or all of the challenges. Specifically, the performance report contained various goals that appeared to be related to four of the six challenges, and the performance plan had goals and measures that appeared to be related to all six challenges.
This report reviews the General Services Administration's (GSA) performance report for fiscal year 2000 and its performance plan for fiscal year 2002 to assess GSA's progress in achieving key outcomes important to its mission. GAO found that GSA met or exceeded 21 of the 34 performance goals related to the three key outcomes, did not meet 11 goals, and was unable to measure 2 goals. For fiscal year 2002, GSA's performance plan includes strategies for achieving the goals that support the outcomes, including the goals that were not met. Overall, GSA's fiscal year 2000 performance report and fiscal year 2002 plan were more informative and useful than the prior year's report and plan.
Although there is no generally agreed-upon definition of partnering, for purposes of this report, partnering arrangements include, but are not limited to, (1) use of public sector facilities and employees to perform work or produce goods for the private sector; (2) private sector use of public depot equipment and facilities to perform work for either the public or private sector; and (3) work-sharing arrangements, using both public and private sector facilities and/or employees. Work-sharing arrangements share characteristics with the customer-supplier partnerships on which we have previously reported. Partnering arrangements exclude the normal service contracting arrangements where contract personnel are used to supplement or assist depot personnel in performing work in depot facilities. DOD spends about $13 billion, or 5 percent of its $250 billion fiscal year 1997 budget, on depot maintenance, which includes repair, rebuilding, and major overhaul of weapon systems, including ships, tanks, and aircraft. The Army has five depots managed by the Industrial Operations Command (IOC), and the Air Force has five depots managed by the Air Force Materiel Command (AFMC). The Navy’s three aviation depots and four shipyards are managed by the Naval Air and Sea Systems Commands. Also, a significant amount of depot repair work is performed at various private contractor facilities. Depots operate through a working capital fund. The fund is used to finance a depot’s cost of producing goods and services for its customers. The fund is reimbursed through customer payments for the goods and services provided and is to be self-sustaining and operate on a break-even basis over the long term. Defense spending and force structure reductions during the 1980s and 1990s resulted in substantial excess capacity in both public and private sector industrial repair and overhaul facilities. Some of DOD’s excess depot maintenance capacity has been reduced through the base realignment and closure process. However, the services and the private sector continue to have large industrial facilities and capabilities that are underused. We have reported and testified that reducing such excess capacity and resulting inefficiencies could save hundreds of millions of dollars each year. Navy officials state that they have already significantly reduced excess capacity by closing three of six aviation depots and four of eight shipyards. To address its excess capacity problem, DOD continues to seek legislative authority for additional base closures under a base realignment and closure type process. However, due to congressional concerns over local social and economic impacts of such closures and questions regarding the savings and experiences from previous closures, such authority has not been provided. There is also a continuing debate between the Congress and the administration over where and by whom the remaining depot workloads will be performed. Central to this debate have been DOD’s efforts to rely more on the private sector for depot maintenance and statutory provisions that (1) require public-private competitions for certain workloads, (2) limit private sector workloads to 50 percent of the available funding for a particular fiscal year, and (3) require maintaining certain core capabilities in the public depots. DOD, the Congress, and the private sector have shown an interest in partnering arrangements as another tool to address the problems of excess capacity and declining workloads. 
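To make the working capital fund mechanics concrete, the minimal Python sketch below illustrates the break-even arithmetic described above, in which customer reimbursements are supposed to offset the costs the fund finances over the long term. All figures and category names are hypothetical and chosen only for illustration.

    # Hypothetical, simplified view of a depot working capital fund: the fund
    # finances the cost of work and is reimbursed by customers, with the aim of
    # breaking even over the long term. All figures are illustrative only.
    costs = {                         # costs financed by the fund, in millions of dollars
        "direct_labor": 40.0,
        "materials": 25.0,
        "in_house_support": 10.0,     # overhead spread across the depot's workload
    }
    reimbursements = {                # customer payments for goods and services, in millions
        "military_customers": 70.0,
        "private_sector_sales": 6.0,  # e.g., advance payments under a partnering sale
    }

    total_costs = sum(costs.values())
    total_reimbursements = sum(reimbursements.values())
    net_operating_result = total_reimbursements - total_costs

    print(f"Total costs:          ${total_costs:.1f} million")
    print(f"Total reimbursements: ${total_reimbursements:.1f} million")
    print(f"Net operating result: ${net_operating_result:+.1f} million")  # should trend toward zero over time

A surplus or deficit in any one year is not unusual; the break-even requirement applies over the long term, and, as the Army examples later in this report suggest, partnering sales add reimbursements that help cover overhead the depots would otherwise recover solely from their military customers.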
DOD agrees with partnering concepts and discusses partnering both in the Defense Planning Guidance, which contains guidance for the services to develop their strategic plans, and in the fourth comprehensive Quadrennial Defense Review, a report required by the Military Force Structure Review Act of 1996, which was included in the National Defense Authorization Act for Fiscal Year 1997. In the Defense Planning Guidance, DOD directs the services to encourage commercial firms to enter into partnerships with depots to reduce excess capacity and overhead burdens and to maintain critical skills. In the Quadrennial Defense Review, DOD states that it will use in-house facilities to partner with industry to preserve depot-level skills and use excess capacity. A number of statutory provisions enacted primarily during the 1990s provide, within limitations, the authority and framework for partnering. Specifically, provisions in title 10 permit working capital funded activities, such as public depots, within specified limits, to sell articles and services to persons outside DOD and to retain the proceeds. Central among these limitations is that any goods or services sold by the depots must not be available commercially. Also, the National Defense Authorization Act for Fiscal Year 1995 authorized the Secretary of Defense to conduct activities to encourage commercial firms to enter into partnerships with depots. Further, section 361 of the National Defense Authorization Act for Fiscal Year 1998 provides that the Secretary of Defense shall enable public depots to enter into public-private cooperative arrangements, which shall be known as “public-private partnerships,” for the purpose of maximizing the utilization of the depots’ capacity. However, the 1998 Authorization Act does not appear to have expanded the services’ ability to enter into such arrangements since section 361 did not contain any specific sales or leasing authority for use in partnering. Table 1 shows the major provisions in title 10, along with relevant sections in the 1995 and 1998 National Defense Authorization Acts, which facilitate partnering. The Army and the Air Force, for various reasons, view partnering arrangements differently. The Army believes that there are substantial opportunities within its legal authority to enter into contractual arrangements with private sector companies for the sale of goods and services. It has entered into a number of such arrangements using this authority. The Air Force believes such opportunities are very limited and has not entered into any such arrangements. The Army has entered into partnering arrangements under the legislation covering sales of goods and services. A sales arrangement is a contract between a depot and a private firm whereby the depot provides specific goods and services. The Army has designated which depots may sell articles and services outside of DOD and has issued specific implementing guidance. In 1995, the U.S. Army Depot Systems Command (now IOC) issued policy guidance for its facilities to enter into sales, subcontracts, and teaming arrangements with private industry. In July 1997, IOC developed the criterion for determining commercial availability. Under the criterion, a customer must certify that the good or service is not reasonably available in sufficient quantity or quality in the commercial market to timely meet its requirements. Cost cannot be a basis for determining commercial availability. 
The Army has also entered into a number of work-sharing arrangements that do not require specific legislative authority. They differ from sales arrangements in that there is no contract between a depot and a private firm. The Air Force has not approved any proposed partnering arrangements. The Secretary of Defense has delegated to the Secretary of the Air Force the authority to designate which depots may sell articles and services outside of DOD. However, the Air Force Secretary has not made any such designations or developed criteria to determine whether a good or service is available from a domestic commercial source. Air Force officials state that 10 U.S.C. 2553, like the corresponding Army sales statute (10 U.S.C. 4543), prohibits the Air Force from selling articles or services if those articles or services are available from a domestic commercial source. However, unlike their Army counterparts, Air Force officials believe the restriction prohibits the sale of almost any product or service their depots could provide. Army depots have entered into a number of partnering arrangements under the current statutory framework and within the context of the public-private workload mix for depot maintenance. These arrangements include sales under 10 U.S.C. 4543 and subcontracting under 10 U.S.C. 2208(j). Red River, Tobyhanna, and Anniston Army Depots all have ongoing arrangements with private industry to provide services such as testing and repair of communications equipment; development of training devices; testing of circuit card assemblies; and overhaul, conversion, and grit blasting of tracked vehicles. For example, table 2 lists sales statute partnering initiatives that were underway at the Anniston depot as of July 1997. In each of these sales arrangements, the Army has awarded the private sector company a contract to perform a certain scope of work. The contractor then makes a business decision to have the depot perform a portion of that work under the sales statutes. The sale is accomplished by a contract between the depot and the private sector firm that allows the depot to be reimbursed for costs associated with fulfilling the contract. These costs are estimated by maintenance personnel and are based on direct labor, materials, and in-house support costs. The contractor must pay the depot in advance for performing the service, and the depot reimburses its working capital fund to cover these estimated costs. For illustrative purposes, the FOX vehicle upgrade and M113 grit blast/test track partnering arrangements are described in more detail below. Following award of the FOX vehicle upgrade contract to General Dynamics Land Systems, Anniston representatives informed the contractor that the depot had facilities and capabilities that could meet the contractor’s needs and provide for substantial facility cost savings and other benefits. In January 1997, officials from Anniston and General Dynamics Land Systems agreed to partner on the upgrade of 62 FOX reconnaissance vehicles. The partnering agreement included a 4-year contract with the depot under 10 U.S.C. 4543. Under the contract, the depot performs asbestos removal, grinding, welding, machining, cleaning and finishing, and prime and final paint operations. Under the terms of the contract with the Army, General Dynamics Land Systems does the upgrade using the depot’s facilities. 
Depot facilities are provided to General Dynamics Land Systems as government-furnished property under its contract with the Army and revert back to the Army when the contract is complete. Depot personnel stated that this partnering arrangement has resulted in (1) a lower total cost for the combined work performed, (2) sustainment of core depot capabilities, and (3) overhead savings from using underutilized facilities. The depot has received about $1 million for its efforts on the first eight vehicles. The contractor stated that this project is a good example of a mutually beneficial program, reporting that it would have cost more to perform the depot’s share of the work at another location. The contractor also reports that it is spending $450,000 to upgrade buildings at the depot and that it will occupy 27,000 square feet of otherwise vacant or underutilized space. A General Dynamics Land Systems official stated that occupying space at the Anniston depot reduced the program’s cost. The partnering arrangement on the M113 grit blast/test track project was entered into under 10 U.S.C. 4543 and 2208(j). The Army was seeking a way to meet its fielding schedule for the M113 and asked United Defense Limited Partnership if it could partner with the Anniston depot to help meet fielding requirements. Under this partnering arrangement, United Defense Limited Partnership contracted with the depot to perform grit blasting on the vehicle hulls, and the depot provided use of its test track facilities pursuant to a subcontract with the contractor under 10 U.S.C. 2208(j). Army officials stated that this partnership will allow them to meet the fielding schedule and reduce overall program costs. Contractor officials stated that using the depot’s grit blasting and test track facilities eliminated the need to build facilities to perform these functions. The Army and private sector defense firms have established noncontractual partnering relationships by sharing workloads. Army program managers generally determine the mix of work between depots and private sector contractors. On any particular workload, either a depot or a private sector firm could receive all or part of the work. Under the Army’s work-sharing partnering arrangements, a depot and a contractor share specific workloads, based on each party’s strengths. The private sector firms’ share of the workload is performed pursuant to a contract with the activity supporting the program. Thus, there are no contracts directly between depots and private sector firms; however, there are memorandums of understanding and detailed agreements on how the partnerships will operate. These agreements generally provide mechanisms to mitigate risks, mediate disputes, and standardize work processes. Discussion of such arrangements at Anniston and Letterkenny depots follows. General Dynamics Land Systems, the original equipment manufacturer for the Abrams tank, and Anniston entered into a work-share partnering arrangement to upgrade the tank. Anniston and the contractor jointly initiated the Abrams Integrated Management XXI program in 1993 to mitigate a number of problems, including a declining depot-level maintenance workload, limited production of new Abrams tanks, and fleet sustainment concerns. The goal of this arrangement was to unite the tank industrial base expertise in armored vehicle restoration, make needed improvements, and extend the life of the fleet while reducing the dollars required to support the fleet. 
The Army approved the arrangement based on its objectives and projected benefits and awarded General Dynamics Land Systems a contract on a sole-source basis for its share of the work. Under this arrangement, the depot disassembles the vehicles, prepares the hull and turret for reassembly, and performs component restoration and overhaul, and then the contractor uses these components for assembly, system integration, and testing. According to depot officials, this partnering strategy retains core capabilities by allowing the depot to maintain its current skill base and reduces overhead costs through additional labor hours. A contractor representative cited benefits from the partnering arrangement such as developing new programs and creating additional business opportunities. The Paladin program is a work-share partnering arrangement between Letterkenny Army Depot and United Defense Limited Partnership. In 1991, the Army determined that full-scale production of the Paladin, a self-propelled howitzer, would be maintained within the private sector. However, due to factors such as cost growth and quality concerns, potential offerors were encouraged to use government facilities to the maximum extent practical. United Defense Limited Partnership proposed that the Letterkenny depot partner with it on reconfiguring the Paladin, which would include the contractor doing its portion of the work at the depot. United Defense Limited Partnership won the contract in April 1993, and the “Paladin Enterprise” was formed in May 1993. Both parties signed a memorandum of understanding that established the roles and rules of the partnership. Under this arrangement, the depot performs chassis and armament overhaul, modification, and conversion to the new configuration. The contractor is required to provide most of the Paladin-unique chassis components, a new turret, subsystems for automatic fire control, and the integration of all components. According to depot officials, all participants in this arrangement are benefiting from the dual use of the depot. Specifically, depot officials reported that collocating the contractor at the depot has resulted in numerous benefits, including $15 million in cost avoidance from eliminating material processing through the Defense Logistics Agency and renovation of a government warehouse, valued at $3.4 million, at the contractor’s expense. Contractor representatives stated that this arrangement has allowed the contractor to remain in the tracked vehicle market and to retain critical skills and technology that will be needed when DOD resumes new vehicle production. The contractor is looking for additional partnering opportunities and believes that its experience with Paladin will enhance its ability to partner on future contracts. None of the Army’s partnering arrangements reviewed included the leasing of excess or nonexcess depot equipment or facilities as permitted under sections 2471 and 2667 of title 10. However, there are a number of partnering arrangements in which depot facilities are provided to contractors as government-furnished property for the performance of the contracts. The Air Force has not approved any of several proposals for its depots to provide products or services to the private sector. For example, in January 1997, ABB Autoclave Systems, Inc., on behalf of Porsche Engineering Services, requested the use of Warner Robins Air Logistics Center’s fluid cell press to form door panels. 
The press manufacturer stated that the depot and Cessna had the only fluid cell presses with the table size needed to produce these door panels. However, the Cessna press was not available. The Center’s Commander requested approval from AFMC to enter into this partnering arrangement with Porsche. In April 1997, AFMC denied the request because it believed that it did not have the authority to enter into such a partnering arrangement since the Secretary of the Air Force had not designated any depots to enter into such arrangements or issued implementing guidance to use in determining commercial availability. In another case, the Oklahoma City Air Logistics Center had excess capacity in its engine test cell and proposed to AFMC that it enter into a partnering agreement with Greenwich Air Services, Inc. Under the terms of the agreement, Greenwich would lease the test cell facilities for testing commercial high bypass turbofan engines. The Center believed that this arrangement would more fully use its test cell, thereby reducing excess capacity. Greenwich also viewed the arrangement as a “win-win” proposal that would defray or delay a capital investment expense and increase its product line. However, AFMC did not approve the request because the Secretary of the Air Force had not designated any depot to enter into sales arrangements or issued implementing guidance to use in determining commercial availability. The Commander, AFMC, stated that he is neither a proponent nor an opponent of partnering arrangements. However, he would consider approving such arrangements if it could be demonstrated that they would save money. He stated that his approach to cost reduction is to (1) identify what is excess and divest it, (2) lease any underused capacity, and (3) then, and only if dollar savings can be demonstrated, explore partnering opportunities. In an era of reduced defense procurement, commercial contractors have become more interested in sharing repair and maintenance workloads with depots. Additionally, depots, in an effort to reduce overhead costs and retain core capabilities, are willing to enter into partnering arrangements with the private sector. A legal framework and the authority to enter into partnering arrangements exist in title 10. These authorities differ in some respects between the Army and the Air Force, as do the two services’ approaches to partnering. The Army has used this legislation, as well as work sharing, to initiate several partnering arrangements which, according to Army and contractor officials, have been mutually beneficial. The Air Force, on the other hand, has not initiated any partnering arrangements, citing the lack of a designation from the Secretary of the Air Force identifying which logistics centers may use the sales statutes and the legislative requirement that the good or service provided by the depot not be commercially available. The Air Force, unlike the Army, has not developed criteria to determine commercial availability and, in the absence of such criteria, has been reluctant to enter into any sales arrangements. Considering DOD’s expressed support of partnering, we recommend that the Secretary of the Air Force designate the Air Logistics Centers that may use the sales statutes and provide implementing guidance to include criteria for determining the commercial availability of goods or services provided by the centers. 
To develop information on the legal framework under which partnering can occur, we identified and reviewed legislation and DOD and the services’ policies and procedures, and we talked to the services’ Offices of General Counsel. We surveyed the services to determine what partnering arrangements were ongoing or had been proposed at their depots and to obtain the services’ views of such arrangements. In addition, we interviewed officials at the Office of the Secretary of Defense; Air Force Headquarters, Washington, D.C.; Army Headquarters, Washington, D.C.; the Naval Sea Systems Command, Arlington, Virginia; the Naval Air Systems Command, Patuxent River, Maryland; the Army Materiel Command, Alexandria, Virginia; Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the Army’s IOC, Rock Island, Illinois; and the Army’s program manager for Abrams tanks. We also visited the Ogden Air Logistics Center, Hill Air Force Base, Utah, and the Anniston Army Depot, Anniston, Alabama. To obtain private sector views on partnering, we interviewed officials and obtained information from Lockheed Martin, Arlington, Virginia; General Dynamics Land Systems, Anniston, Alabama; United Defense Limited Partnership, Arlington, Virginia; and United Defense Limited Partnership-Steel Products Division, Anniston, Alabama. We did not independently verify the benefits reported by the depots and the contractors; however, we did obtain documentation related to and supporting the reported figures. We conducted our review between June 1997 and February 1998 in accordance with generally accepted government auditing standards. DOD concurred with our findings and recommendation and provided a number of comments that it characterized as technical. Where appropriate, we made minor changes and clarifications in response to these comments. However, we believe that one of the comments warrants further discussion. DOD commented that the definition of partnering varies and that the Air Force has done many projects that could be considered partnering. As an example, DOD cited an agreement between Warner Robins Air Logistics Center and Lockheed Martin Corporation for repair services for the LANTIRN navigation and targeting systems. During our review, we discussed the LANTIRN project with officials from Warner Robins. Warner Robins officials explained that the project was to be implemented in two phases, with phase I being a firm-fixed-price contract awarded to Lockheed Martin for the repair of 40 items. According to these officials, this contract was essentially the same as any contract the Center enters into except that the contractor would perform the work at Center facilities. These officials also stated that phase I of the LANTIRN project does not constitute a partnering arrangement. However, under phase II of the project, if approved, Lockheed would subcontract with the Center for repair services to the LANTIRN for foreign military sales. This would be considered a partnership arrangement as defined in our report because it constitutes the use of public sector facilities and employees to perform work or produce goods for the private sector. We are sending copies of this report to the Secretaries of Defense, the Army, the Air Force, and the Navy; the Director, Office of Management and Budget; and interested congressional committees. Copies will be made available to others upon request. If you have any questions concerning this report, please contact me at (202) 512-8412. Major contributors to this report are listed in appendix III. Enlogex, Inc. 
Ronald L. Berteotti, Assistant Director; Patricia J. Nichol, Evaluator-in-Charge; Oliver G. Harter, Senior Evaluator; and Kimberly C. Seay, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed the use of partnering arrangements between the Department of Defense (DOD) and private-sector contractors to use excess capacity at military service repair depots, focusing on the: (1) legal framework under which partnering can occur; and (2) types of current partnering arrangements and the services' and industry's views of such arrangements. GAO noted that: (1) a number of statutory provisions enacted primarily during the 1990s provide, under certain conditions, the authority and framework for partnering arrangements; (2) various provisions of title 10 of the United States Code allow the services to sell articles and services outside DOD for limited purposes and under certain conditions; (3) the Army has this authority for many of its industrial facilities under section 4543 of title 10; (4) the Army controls the sales authority under this provision; (5) the authority for the remaining DOD industrial facilities, including those of the Air Force, is contained in 10 U.S.C. 2553; (6) it requires the Secretary of Defense to designate which facilities will have the authority to sell articles and services outside of DOD; (7) under both provisions, the goods or services sold must not be available commercially in the United States and providing these goods and services must not interfere with a facility's military mission; (8) due in part to these differing authorities, the extent to which the Army and the Air Force pursue partnering arrangements varies; (9) the Army has designated depots that may sell articles and services outside of DOD and has developed criteria for determining when such goods and services are not commercially available; (10) at the time of GAO's review the Army had established 13 partnering arrangements using both the sales statutes in title 10 and worksharing arrangements not requiring specific legislation; (11) Army and private-sector officials state that partnering has improved operational efficiencies at their respective facilities and that they are pursuing additional partnering opportunities; (12) the Secretary of Defense has delegated to the Secretary of the Air Force the authority to designate which facilities may sell articles and services outside of DOD; (13) however, the Air Force Secretary has not made any such designations nor developed criteria to determine whether a good or service is available from a domestic commercial source; (14) there have been several private-sector and depot proposals to enter into partnering arrangements but none have been approved; and (15) the Commander of the Air Force Materiel Command states that he is not opposed to partnering, but he is not willing to enter into such arrangements unless savings can be demonstrated.
ACPVs are non-tactical vehicles, or vehicles not used in combat operations, that can be lightly or heavily armored. The level of armoring depends on the expected threat. Both light and heavy armored vehicles provide 360-degree protection of the passenger compartment against ballistic threats, with commercial light armored vehicles providing slightly less protection than commercial heavy armored vehicles. Both variants are intended to transport American citizens and service members, as well as other passengers, in and around dangerous areas. ACPVs are extensively modified from commercially available sedans, trucks, or sport utility vehicles as they are intended to be inconspicuous and blend in with local traffic.
ACPVs differ from traditional DOD military armored vehicles in various ways. First, traditional military armored vehicles are designed with military applications in mind, and typically the armor is integral to the design and construction. That is not the case with ACPVs, which are initially built for commercial markets and later disassembled, armored, and reassembled. Second, military armored vehicles are acquired through major defense acquisition programs while ACPVs are not. The guidance and regulations associated with major defense acquisition programs are generally not applicable to ACPVs. Moreover, an ACPV is considered a modified commercial item in that it is an item customarily used by the general public, except for modifications (armoring) made to meet the government's requirements. Therefore, ACPVs are not subject to the developmental and operational testing required of major defense acquisition programs, although material and acceptance testing for functionality, armor certification, and roadworthiness is to occur. Figure 1 presents a comparative illustration of a typical ACPV and a typical military armored vehicle.
To meet its need for ACPVs, DOD components can procure vehicles through a variety of means. According to officials from the DOD components in our review—the Army, Air Force, Navy, Marine Corps, and the Defense Intelligence Agency (DIA)—DOD components procured more than 410 ACPVs from 2011 through 2015. Because corroborating documentation was unavailable in a few cases, we were unable to verify the exact total number of vehicles. Appendix I provides additional details on this limitation. Due to classification concerns, we do not identify procurement quantities at the individual component level.
DOD and its components—the Army, Navy, Air Force, Marine Corps, and DIA, the largest buyer of ACPVs in DOD—are subject to a plethora of guidance related to the procurement of ACPVs, much of which is similar—and, in most cases, identical—to that used by State. For DOD, that guidance exists at the overall federal level, the department level, and the individual component level. State follows guidance that exists at both the federal level and the department level. For both agencies, the guidance covers key aspects of ACPV acquisitions, including procurement methods, protection levels, vendor clearances, inspection and acceptance, warranties, and fleet oversight. Agency officials at State and DOD components cited the FAR as the capstone guidance for their procurement activities. At the DOD level, in 2007, the Under Secretary of Defense for Policy issued DOD Instruction C-4500.51, DOD Commercially Procured and Leased Armored Vehicle Policy. The department delegates much of the responsibility for ACPV procurement to the components.
In addition to the FAR, State follows its own guidance, which includes the Foreign Affairs Manual and Foreign Affairs Handbook on ACPV procurement, inspection, and fleet management. Multiple methods exist for the procurement of ACPVs, including standalone contracts negotiated directly with a vendor, purchases from the GSA Multiple Award Schedule Program, interagency acquisitions, and no-cost transfers from other agencies with excess property. The four methods used for procurement are described in more detail below. Direct Contracts with Vendors: Since ACPVs are modified commercial items, agencies can utilize streamlined procedures for solicitation and evaluation, provided under the FAR. With this approach, the agency issues a request for proposals. Vendors respond with their pricing, armor certifications, delivery schedules, warranty information, and any other information required. The agency then evaluates the offerors’ proposals and makes an award. Use of General Services Administration Schedules Program: ACPVs can be procured from GSA’s Multiple Award Schedule program. This program provides federal agencies with a simplified process for obtaining commercial supplies and services at prices associated with volume buying. In these cases, the GSA has prequalified and awarded indefinite delivery/indefinite quantity contracts—contracts that provide for an indefinite quantity, within stated limits, for a fixed time—to a number of vendors, and agencies can place orders against those contracts to meet their needs. Interagency Acquisitions: An interagency acquisition takes place when an agency that needs supplies or services obtains them from another agency. The Economy Act of 1932, as implemented in the FAR, provides general authority for federal agencies to undertake interagency acquisitions when a more specific statutory authority does not exist. Interagency acquisitions under the Economy Act can save the government duplicative effort and costs when appropriately used and leverage the government’s buying power. In doing so, the acquiring agency can convey responsibility for several aspects of the procurement to a separate agency that is better poised to execute the acquisition. Excess Personal Property Transfers: In some cases, an agency may have excess inventory and can transfer ACPVs at no cost to the acquiring agency, thus avoiding the procurement process altogether and, in a sense, resulting in savings by the acquiring entity. The FAR states that agencies whose property is transferred to other agencies shall not be reimbursed for the property in any manner. DOD has outlined minimum blast and ballistic armoring requirements for protection against explosives and firearms, respectively, for ACPVs in DODI C-4500.51, but the detailed armoring specifications outlined in the instruction are classified. Generally, the specifications detail the minimum ballistic and blast protection standards that must be satisfied by all DOD ACPVs, whether they are light or heavy armored vehicles. State also has a classified policy that outlines armoring specifications for the ACPVs it procures for use in locations around the world. The FAR contains provisions for safeguarding classified information that apply to all federal agencies procuring goods and services, including DOD and State. While neither DOD nor State policies for ACPVs directly address vendor clearances, both agencies must comply with the FAR. 
Depending on the armoring specifications cited in the contract, a vendor supplying ACPVs to the government may require access to classified information. To accommodate such cases, Executive Order 12829 created the National Industrial Security Program, for which the Secretary of Defense is the executive agent, to safeguard classified information released to contractors. To implement the order, DOD issued the National Industrial Security Program Operating Manual to prescribe requirements, restrictions, and other safeguards necessary to prevent unauthorized disclosure of classified information and to control authorized disclosure of classified information released by executive branch departments and agencies to their contractors. The FAR requires a security requirements clause when the contract may require access to classified information. The clause requires the contractor to comply with the requirements identified in the National Industrial Security Program Operating Manual. In addition, as part of the process of obtaining a facility clearance, a contractor must sign a DOD Security Agreement, which documents the security responsibilities of both the contractor and the government in accordance with the requirements of the manual. As part of this program, the Defense Security Service within DOD administers and implements the defense portion of the National Industrial Security Program. The Defense Security Service serves as the interface between the government and "cleared industry" and maintains a database of contractors that have valid, current facility clearances that allow for the safeguarding of classified material.
While the July 2007 DODI C-4500.51 does not contain any specific instructions requiring ACPV inspection and acceptance procedures, it does state that DOD component heads shall ensure that the vehicles comply with armoring standards and existing acquisition regulations and specifically mentions the FAR. State's ACPV policy is similar to the DODI with respect to inspections, but State is also required to comply with the FAR. The FAR provides that agencies shall ensure that contracts include inspection and other quality requirements that are determined necessary to protect the government's interest. The regulation goes on to state that commercial item contracts shall rely on a contractor's existing quality assurance system as a substitute for compliance with government inspection and testing before items are provided for acceptance, unless customary market practices for the commercial item being acquired permit in-progress inspection. The FAR contains additional language that provides the contracting officer with discretion in determining the type and extent of contract quality requirements, which could include additional inspections. In particular, the FAR states that the government shall not rely on inspection by the contractor if the contracting officer determines that the government has a need to test the supplies prior to acceptance, and, in making that determination, the FAR directs the contracting officer to consider, among other things, the nature of the supplies and services being acquired, their intended uses, and the potential losses in the event of defects.
Similar to the areas outlined above, the DODI C-4500.51 does not contain any specific language requiring warranties for ACPV procurements, but it states that the vehicles shall be procured in accordance with the FAR.
However, State is bound by the FAR. The FAR states that the use of warranties is not mandatory. However, the FAR sets forth criteria that contracting officers shall consider when deciding whether a warranty is appropriate. These factors include, but are not limited to, complexity and function, the item's end use, difficulty of detecting defects before acceptance, and potential harm to the government if the item is defective. The FAR also offers suggested terms and conditions that contracting officers may incorporate into contracts. For example, in the event defects are discovered, the government may obtain an equitable adjustment of the contract or direct the contractor to repair or replace the defective item at the contractor's expense.
The DODI C-4500.51 outlines a number of responsibilities for different DOD officials that relate to ACPV fleet management and, ultimately, oversight. In particular, the instruction establishes that an assistant secretary within the Office of the Under Secretary of Defense for Policy shall be the principal individual responsible for collecting and reporting information specific to DOD's ACPV fleet. Part of that reporting includes providing ACPV-related information to Congress. State policy includes similar provisions for ACPV management and oversight. While DOD and the components have developed policies and procedures for managing their non-tactical vehicle fleets, the language contained in those instructions often defers to DODI C-4500.51 for specific ACPV guidance. Table 1 identifies the component-level policies that exist for ACPVs, provides a brief description of each, and notes whether there is a particular office within the component for ACPV-related matters.
Similar to the instructions and manuals used by the DOD components, State's Foreign Affairs Manual outlines roles and responsibilities for its armored vehicle program. Other policies and procedures are incorporated by reference in these manuals for items such as armoring standards, vehicle procurement, assignments (i.e., locations), maintenance, and disposal. This guidance also designates a single State entity—the Bureau of Diplomatic Security—as having overarching responsibility for the armored vehicle program.
Selected DOD components in our review complied with guidance for the procurement and inspection of ACPVs for the contracts we reviewed. Further, we found evidence of in-progress inspections of DOD's ACPVs, although the Army conducted such inspections for only a single contract action. DOD utilized the four procurement methods described above for acquiring the vehicles, all of which are allowable under the FAR. The blast and ballistic armoring standards referenced in the contract actions we reviewed satisfy the levels of protection required under DODI C-4500.51. For classified contract actions, vendor security clearances were requested and verified. All the contract actions reviewed had similar warranty provisions and generally reflected what is stated in the FAR. We found no evidence of contracts for correcting armoring deficiencies after delivery. The contracts we reviewed generally included FAR-based language for inspections and acceptance and in-progress inspections. Further, due to implementation of Office of the Secretary of Defense (OSD) efficiency initiatives, DOD components no longer report ACPV information to the OSD as DODI C-4500.51 had required.
Moreover, the Army has no central office with complete oversight of contracting and fleet management activities or that maintains all relevant ACPV-specific information.
In accordance with allowable FAR provisions, DOD components in our review utilized four procurement methods to acquire ACPVs between 2011 and 2015. Specifically, the components used direct contracts with vendors, GSA multiple award schedules, interagency acquisitions, and excess personal property transfers to acquire the vehicles. According to DOD officials, DOD components consider multiple factors in deciding how to procure ACPVs and meet armoring requirements, including the quantity of ACPVs needed, the components' expertise in procuring the vehicles, the components' technical specifications, and the urgency of the requirement. Table 2 presents the DOD components included in our review and the four methods they used to procure ACPVs.
The Army and DIA awarded contracts directly to vendors and also placed orders under GSA's Multiple Award Schedule Program. According to DOD officials, one DOD component contracted with a vendor that subcontracted the armoring work; in this type of arrangement, the subcontractor is generally referred to as a third-party armorer. The Navy and Marine Corps used interagency acquisitions pursuant to the Economy Act whereby State ordered ACPVs on their behalf using State contract vehicles. Marine Corps and Navy officials stated that, by doing so, they transferred all procurement responsibilities to State. This approach also allowed these components to leverage State's volume purchasing power, which, according to a Navy official, resulted in cost savings for ACPVs. DIA received some ACPVs as transfers from State's and another agency's excess property. State officials stated that the Marine Corps may also have received some ACPVs as excess property from State's inventory but were unable to provide corresponding documentation. We saw no evidence of fund transfers, as the ACPVs were transferred free of charge to DIA, in compliance with the FAR.
While the contract actions we reviewed generally did not explicitly reference the DODI C-4500.51 armoring specifications, they did reference other standards that were similar in most respects to those specifications, which allowed them to avoid creating a classified contract. These included standards from State, the North Atlantic Treaty Organization Standardization Agency, and the European Committee for Standardization. These standards are similar to the DODI armoring specifications in many respects, but the North Atlantic Treaty Organization standards and the European standards are unclassified. The three armoring specifications that were most frequently referenced in the contract actions we reviewed included State standards, North Atlantic Treaty Organization standards, and European standards. In cases where the contract documentation referred to standards that did not satisfy the minimum armoring specifications outlined in DODI C-4500.51, there was supplemental language in the contract that compensated for the differences. The North Atlantic Treaty Organization standards contain ballistic and blast specifications similar to those in DODI C-4500.51, while the European standards cover only ballistic armoring specifications. Any additional details regarding the differences between the standards are classified.
DOD is currently updating its criteria with regard to armoring standards pursuant to findings and proposed steps contained in an August 2015 DOD report on ACPVs. The DOD report stated that the department should regularly review and update armoring specifications. The department cancelled DOD Instruction C-4500.51 in May 2017 because, according to an OSD official, the Under Secretary of Defense for Acquisition, Technology, and Logistics did not want the responsibility for determining the new armoring requirements. Anticipating the cancellation of DODI C-4500.51, the department issued a separate instruction. This instruction, DODI O-2000.16 Volume 1, dated November 2016, gave DIA responsibility for developing minimum standard inspection criteria for ACPVs. DIA is also responsible for disseminating specifications for the acquisition or modification of ACPVs and overseeing their incorporation into contracts awarded by DOD components. DIA officials said the criteria have been developed, but the agency is still determining how they will be distributed to the components. Also, DIA has not yet established a process for ensuring that components incorporate those criteria in their contracts. According to DIA officials, implementing a process for oversight may be challenging for their agency. As of April 2017, DIA had not yet determined how long it would take to complete these actions.
The majority of DOD components' contract actions we reviewed were unclassified, and, in those cases, no security clearance information was required or requested. For the unclassified contract actions, the contractors never required access to any classified information and the components did not require security clearances. This included the contract actions with the third-party armorers—neither the prime contractors nor the subcontractors required any classified information, so there was neither a need nor a request for security clearances. Some of the contract actions we reviewed were classified because they required armoring in accordance with State standards, which are classified, while other contract actions cited alternative standards and, therefore, were unclassified. Specifically, when the Army required a security clearance, the vendor provided evidence of its facility clearance with its proposal. The Navy's and Marine Corps' ACPVs were procured via interagency acquisition using State contracts, which were all classified, as they required armoring to the classified State standards. In these cases, State officials told us that their Industrial Security Division performs an initial check of whether prospective vendors possess the required security clearance and provides results to the contracting office. According to officials, at contract award the Industrial Security Division issues a final, signed classification specification form to document that the selected vendor's clearance is in accordance with the requirements of the contract. In these cases, we found evidence that State took steps to ensure vendors were properly vetted and cleared, including obtaining signed classification specification forms.
State and all selected DOD components, with the exception of the Army, provided evidence of in-progress inspections for each contract action used to procure ACPVs between 2011 and 2015. All contract actions reviewed included provisions for inspections and acceptance, including in-progress inspections.
According to DIA officials, conducting in-progress inspections of their ACPVs is a best practice and a key step in ensuring vehicle quality and safety. We reviewed documentation for each DIA contract action and found evidence of in-progress inspections for all of them. Such evidence included detailed trip summary reports that documented multiple aspects of in-progress inspections at vendor armoring facilities. The in-progress inspection trip reports identified problems early that could be corrected before another in-progress inspection or the final inspection; deficiencies were dealt with before delivery and acceptance of the ACPVs. The trip reports contained detailed narratives listing the inspection dates, manufacturing facilities, inspection attendees, pictures of the vehicles, and any problems and corrections. The narratives also detailed the ACPVs' performance and road tests, any problems with the ACPVs, how problems identified in an earlier in-progress inspection were corrected, and any action items or follow-up for the contractor.
We received evidence that State conducted in-progress inspections of ACPVs procured on behalf of the Navy through interagency acquisitions. As with DIA, State considers in-progress inspections to be a best practice when procuring ACPVs. Those inspections were similar to DIA's. Specifically, the contract files contained checklists for in-progress inspections of opaque armor, transparent armor, and roadworthiness, as well as vehicle components such as the engine, exterior, interior, operation/control, and special equipment/options. The files also contained evidence of final inspection armoring checklists completed by State personnel. State personnel inspected the vehicle's chassis, glass, serviceability, appearance, and roadworthiness. Based on our review of in-progress inspections conducted by State, there were issues with vehicles ranging from problems with adhesive or fenders to a need to reseal transparent armor. Lastly, there was evidence of final acceptance, indicating that any issues discovered in inspections were addressed, with both State officials and Navy officials accepting the ACPVs under these interagency acquisitions.
According to a Marine Corps official, the Marine Corps deferred to State to conduct in-progress inspections of its ACPVs procured through interagency acquisitions. The Marine Corps identified the State contracts that were used to procure its ACPVs, and State provided evidence of final and in-progress inspections and acceptance for vehicles procured under those contracts. However, GAO could not confirm that those inspection records corresponded to the Marine Corps' ACPVs in every case. The inspection records referenced vehicle identification numbers that linked to State's contracts and task orders, but neither State nor the Marine Corps was able to provide all the task orders required to corroborate these purchases. While this demonstrated that inspections were conducted for vehicles procured under these contracts, it did not allow verification that all the Marine Corps' ACPV orders were placed under those contracts.
Army contract actions contained language and clauses for in-progress inspections as well as final inspections and acceptance, and Army officials provided evidence of final inspections and acceptance of ACPVs procured between 2011 and 2015. However, Army officials conducted in-progress inspections for only a single procurement, in 2011.
Although the remaining Army contract actions included clauses that allowed such inspections, the Army instead depended on the vendors' certified quality control and inspection processes to ensure the vehicles were manufactured to specifications. Army officials acknowledged that they did not conduct in-progress inspections for any other ACPVs procured between 2011 and 2015, but maintained that they had visited all the armoring facilities under other contracts prior to the period of our review. However, because the Army did not conduct in-progress inspections, the service relied on the vendors' quality control processes and, in effect, on a presumption of quality for those vehicles, without component-level, firsthand verification of armoring processes and safety. As we noted earlier, both State and DIA found problems during their in-progress inspections that may not have been discovered otherwise. As a result, there is the risk that Army ACPVs may be placed into service with undetected defects.
As mentioned above, DOD is updating its ACPV criteria. These updated criteria are expected to include minimum specifications for inspections pursuant to findings and proposed steps contained in DOD's August 2015 report on ACPVs to the House Armed Services Committee. According to the report, the minimum inspection criteria will include various stages of inspections, including in-progress inspection. Although this is a positive step, these changes have not yet been approved, promulgated to the components, and implemented, nor is there a mechanism in place to ensure the criteria are being consistently applied and executed across the components. Until these criteria are approved and implemented, the risk of vehicles deploying with defects remains.
While the FAR provides that contracts for commercial items shall generally rely on the contractor's existing quality assurance system as a substitute for government inspection, the regulation also provides the contracting officer with discretion to conduct in-progress inspections when deemed appropriate. Specifically, the FAR directs the contracting officer to consider the nature of the supplies and services being acquired and the potential losses in the event of defects. Both DIA and State determined that in-progress inspections of ACPVs are warranted, as the intended use of these vehicles is to transport American citizens and service members through dangerous areas, and failures stemming from armoring deficiencies could endanger passengers. In addition, officials from both DIA and State consider in-progress inspections imperative and a best practice for improving the likelihood that their ACPVs are armored in a manner that meets contractual specifications. DIA in-progress inspections discovered vehicle deficiencies that required corrective actions. These inspections are above and beyond the quality control procedures provided by the vendors. They serve as safeguards and provide greater confidence that ACPVs are being built in a manner that satisfies minimum armoring specifications and that the ACPVs are protecting the lives of the people who rely on them in potentially dangerous situations.
The nature of the armoring process itself suggests in-progress inspections are important. The armoring process involves disassembling the commercial vehicles, integrating the armor, and then rebuilding the vehicles, which essentially conceals evidence of the armoring techniques.
As a result, any defects that are not discovered during the armoring process may not be noticeable during the government's final inspection and acceptance event. Given the intended use of these vehicles to transport American citizens and service members, as well as other passengers who are considered high-value targets, through dangerous areas, further inspection of ACPVs is an important step in the quality assurance system.
All contract actions we reviewed had some form of warranty provision. Most contract actions we reviewed had a 1- to 3-year warranty range for opaque armor (i.e., steel). All contract actions had a 2-year warranty for transparent armor (i.e., glass) and coverage at the ACPV's fielded location at no cost to the government. All DIA contract actions also had 2-year warranties for workmanship. The FAR has no mandatory policy requiring warranties, but it does direct contracting officers to consider several factors when determining whether a warranty is appropriate for an acquisition. DOD officials stated that any problems with the ACPVs were minor, such as window noise. These problems were documented and corrected in the inspection phase before final acceptance by the government. Officials from the components stated that their ACPVs did not have any catastrophic failures during testing or in the field. Further, we found no evidence of contract actions for correcting armoring deficiencies after delivery.
Under the OSD efficiency initiatives mentioned above, Office of the Secretary of Defense Principal Staff Assistants and DOD component heads, in coordination with the Director, Administration and Management, and the General Counsel of the Defense Department, were to eliminate all non-essential, internally generated reports, including any and all reports generated with a commissioning date prior to 2006, and the Director, Administration and Management, was to publish guidance regarding use of, cost-benefit analysis of, and establishing sunset provisions for, report requirements. While OSD eliminated this reporting, as mentioned above, there is a requirement in DODI O-2000.16 Volume 1 for DIA to oversee incorporation of armoring and inspection criteria in all components' contracts. DOD officials stated that meeting this requirement will require some coordination among the components. DIA officials said the agency does not currently have a mechanism for such oversight and that establishing such a mechanism could be challenging. This situation puts a premium on coordination between DIA and the services and increases the importance of the services being able to provide procurement and inspection information to DIA.
With the exception of the Army, all the DOD components we reviewed have a central point of contact and mechanisms for managing and organizing their ACPV information. According to an Army official, while the Army's program office for non-tactical vehicles can track Army-wide vehicle condition for replacement decisions, that office does not maintain more comprehensive ACPV information, such as information for contract execution and vehicle inspections, across the entire Army. This decentralized approach to ACPV management leaves the Army with an incomplete picture of various ACPV-related matters, including consistency of procurement and inspection methods. For example, since the Army does not maintain ACPV information in a centralized manner, it may be difficult for the Army to provide DIA, for oversight purposes, with information on the types of contracts used for procuring these vehicles and whether in-progress inspections are being conducted.
It could also make it difficult for the Army to apply best practices and lessons learned consistently across its purchasing entities and to leverage contracting mechanisms to obtain the best value for the government. Federal standards for internal control call for mechanisms that allow for oversight intended to help an organization, such as the Army, meet objectives and manage risks for activities such as ACPV procurement. The internal control standards advocate for an oversight structure to fulfill responsibilities set forth in laws and regulations and for control activities at various levels to help meet objectives and manage risks. Such control activities would include management reviews to compare actual performance to planned or expected results throughout the organization. Further, internal controls advocate for reports for use by the organization to ensure compliance with internal objectives, evaluate compliance with laws and regulations, and inform outside stakeholders.
While selected DOD components in our review are complying with guidance, policies, and procedures for ensuring the safety and quality of ACPVs, opportunities exist for the Army to provide greater assurances that vehicles meet armoring and quality specifications. DOD's use of ACPVs to transport personnel through areas that are understood to have potential for attack increases the importance of in-progress inspections and oversight. Such inspections provide greater assurances that vendors are adhering to established quality assurance procedures and delivering vehicles that satisfy the armoring standards for protecting passengers. By DIA's own admission, overseeing the implementation of revised armoring and inspection standards in DOD contracts will be a challenge. A focal point within each of the DOD components that can collect and report ACPV-related contracting information to DIA could help ease that burden. While many components have a single, centralized office that is responsible for all aspects of ACPV management and would be capable of reporting this information, the Army's non-tactical vehicle office does not maintain similar information. In that regard, the Army could benefit from a centralized point of contact that can collect and, ultimately, report to DIA information pertaining to all aspects of the component's ACPV safety, procurements, and fleet status.
To help ensure that ACPV armoring and quality standards are met and that evolving department and component policies are consistent and consistently applied, we recommend that the Secretary of Defense (1) direct the Secretary of the Army to conduct in-progress inspections at the armoring vendor's facility for each procurement until the department approves and implements the updated armoring and inspection standards, and (2) direct the Secretary of the Army to designate a central point of contact for collecting and reporting ACPV information to facilitate DIA's oversight of armoring and inspection standards in these contracts.
We provided drafts of this product to the Department of Defense (DOD) and the State Department for comment. In its comments, reproduced in appendix II, DOD concurred with our recommendations. DOD also provided a technical comment, which we incorporated as appropriate. As we made no recommendations to the State Department, it did not provide comments.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Director of the Defense Intelligence Agency; and the Secretary of State. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
This report addresses DOD components' procurement of Armored Commercial Passenger-Carrying Vehicles (ACPVs). The objectives are to determine (1) DOD's guidance and procedures for acquiring ACPVs and how they compare with those at State; and (2) the extent to which selected DOD components adhere to guidance, policy, and procedures for ensuring the safety and quality of ACPVs.
To assess DOD's guidance and procedures for acquiring ACPVs and how they compare with those at State, we reviewed the DOD instruction for specific guidance pertaining to ACPV acquisitions and the department-level policies for procuring modified commercial vehicles. We also reviewed the associated federal acquisition regulations that pertain to the various aspects of our review, namely those for procurement mechanisms, warranties, security clearances, inspection, and acceptance. We identified service-specific guidance that could also apply to the acquisition and inspection of ACPVs and interviewed DOD service and agency officials to verify its applicability to ACPV procurement. We researched the State Foreign Affairs Manual and Foreign Affairs Handbook for specific sections dealing with various aspects of ACPV procurement and inspection and verified their applicability during meetings with State officials. We summarized the contents of DOD and State policies for comparative purposes. We also analyzed armoring standards that were referenced in contract file documents—which included State standards, North Atlantic Treaty Organization standards, and European standards—and compared them with the minimum armoring standards outlined in DOD Instruction C-4500.51, the relevant instruction for the time frame we assessed. The specific armoring standards contained in the DOD Instruction and State policy are classified, which precludes us from presenting a detailed assessment of those standards in this report.
To determine the extent to which selected DOD components—namely the Army, Navy, Marine Corps, and DIA, the largest procurer of these vehicles for use overseas—adhered to guidance, policy, and procedures for ensuring the safety and quality of armored commercial passenger-carrying vehicles, we worked with DOD and State officials to identify contract actions that were used to acquire ACPVs that DOD components received between 2011 and 2015. We selected this time frame to cover the period from when DOD stopped reporting this information to Congress through the most recent information available at the time of our review. For each contract action, we reviewed numerous documents, including base contracts, task orders, work statements, vendor proposals, invoices, and inspection reports, in order to identify evidence of contracting mechanisms, armoring specifications, vendor clearances, inspection and acceptance, and fleet management.
We also created data collection instruments, populated them with the information obtained during the course of our review, verified the information with agency officials through multiple interviews, and created summary analyses that allowed us to succinctly present the information in our report. We searched the federal procurement database to identify any instances where separate contracts were executed to correct any deficiencies that were discovered after vehicles were fielded. We were unable to identify any such contracts.
To determine the total quantities of ACPVs that selected DOD components purchased between 2011 and 2015, we sent questionnaires to agency officials asking specifically about procurement quantities. We also reviewed contract file documentation that pertained to quantities obtained over that time frame and summarized the results. Although we calculated a quantity of ACPVs that DOD components procured from 2011 to 2015, the information State provided on vehicles it had supplied to DOD was inconsistent with the information provided by the services. As a result, we were unable to verify the exact total number of vehicles DOD components acquired over this time frame. For example, State and Marine Corps officials both reported vehicle quantities and contract numbers, but they were unable to provide task orders to validate those quantities.
We conducted this performance audit from May 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Marie A. Mak, (202) 512-4841, makm@gao.gov. In addition to the contact named above, J. Kristopher Keener, Assistant Director; Emily Bond; Thomas M. Costa; Andrea C. Evans; Marcus C. Ferguson; Kristine R. Hassinger; and Hai V. Tran made key contributions to this report.
DOD uses armored military vehicles for combat and operational support, but it also uses armored commercial vehicles to transport military and civilian personnel in areas that pose a threat to their safety. These vehicles differ in many ways, including mission and appearance. The House Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2017 contained a provision for GAO to assess multiple aspects of DOD's procurement practices for ACPVs. This report assesses (1) DOD's guidance and procedures for acquiring ACPVs and how they compare with those at the Department of State; and (2) the extent to which selected DOD components adhere to guidance and procedures for ensuring the safety and quality of ACPVs. To conduct this work, GAO analyzed policies, procedures, and regulations that govern aspects of acquiring, armoring, inspecting, and managing ACPVs; interviewed DOD and State Department officials; and compared armoring standards DOD components—Army, Navy, Marine Corps, and Defense Intelligence Agency—use for ACPVs against minimally acceptable protection standards. GAO reviewed contract actions for selected DOD components between 2011 and 2015. The Department of Defense (DOD) and the defense components in GAO's review—Army, Navy, Air Force, Marine Corps, and Defense Intelligence Agency, the largest buyer of armored commercial passenger-carrying vehicles (ACPV) in DOD—have a plethora of guidance related to ACPV procurement. This guidance is similar to that used by the Department of State, which also procures a large number of these vehicles (see figure). DOD officials GAO spoke with cited the Federal Acquisition Regulation as the capstone guidance for procurement activities. For DOD, guidance also exists department-wide and at the individual component levels. Guidance covers numerous aspects of ACPV acquisitions, including procurement methods, protection levels, inspection and acceptance, warranties, and oversight. ACPV-related contract actions for the selected DOD components generally complied with guidance, policies, and procedures for ensuring the safety and quality of ACPVs and included contract language that met minimum armoring standards. However, opportunities exist for the Army to improve its processes for in-progress inspections—inspections that occur as the vehicle is being armored—as the Army instead depended primarily on the vendors' quality control processes. GAO's review of contract actions used to procure ACPVs for selected DOD components between 2011 and 2015 showed that in-progress inspections were conducted, with the exception of the Army, which conducted such inspections for only a single contract action. Without in-progress inspections, the Army is accepting risk in the safety of its vehicles. Further, with the exception of the Army, all the DOD components have a central office and mechanisms for reporting ACPV information. This decentralized approach leaves the Army with an incomplete picture of various ACPV-related matters, including procurement and inspection methods. Federal standards for internal control call for mechanisms that allow for oversight intended to help an organization, such as the Army, ensure compliance with armoring and inspection standards. Without a designated central point of contact, the Army may face challenges for reporting ACPV information to DOD officials responsible for overseeing the implementation of armoring and inspection standards department-wide. 
The Secretary of Defense should require the Army to conduct in-progress inspections and designate a central point of contact for ACPV information. DOD concurred with the recommendations.
State is the lead agency for the conduct of American diplomacy, and its foreign affairs activities seek to promote and protect the interests of American citizens. State requires that Foreign Service officers assigned to certain positions worldwide meet a specified level of proficiency in the language or languages of the host country. As of October 31, 2008, State had about 3,600 positions worldwide that required language proficiency and 530 positions where such proficiency was preferred but not required (language-preferred positions). (See table 1.) State categorizes these languages as "world" (for example, Spanish or French), "hard" (for example, Urdu), or "superhard" (for example, Arabic or Chinese) based on the time it generally takes individuals to learn them. State has also defined its need for staff proficient in some languages as "supercritical" or "critical," based on criteria such as the difficulty of the language and the number of language-designated positions in that language, particularly at hard-to-staff posts. About 970 language-designated positions, or 27 percent, are for supercritical or critical needs languages.
State uses the foreign language proficiency scale established by the federal Interagency Language Roundtable (ILR) to rank an individual's language skills. The scale has six levels, from 0 to 5—with 5 being the most proficient—to assess an individual's ability to speak, read, listen, and write in another language. State sets proficiency requirements only for speaking and reading, and these requirements tend to cluster at proficiency levels 2 and 3. Table 2 shows the language skill requirements for each proficiency level. The difference between the second and the third proficiency levels—the ability to interact effectively with native speakers—is significant in terms of training costs and productivity. For example, State provides about 44 weeks of training to bring a new speaker of a so-called superhard language such as Arabic up to the second level. Moving to level-3 proficiency usually requires another 44 weeks of training, which is generally conducted at field schools overseas.
State faces notable shortfalls in meeting its foreign language requirements for overseas language-designated positions. Overall, 31 percent of Foreign Service generalists and specialists in language-designated positions worldwide did not meet the speaking and reading proficiency requirements of their positions as of October 31, 2008. While the extent of these shortfalls varies, they are found in all regions, in all languages, and in all types of positions. These shortfalls may have adverse impacts on security, public diplomacy, consular operations, economic and political affairs, and other aspects of U.S. diplomacy.
As of October 2008, 31 percent of Foreign Service generalists and specialists in language-designated positions worldwide did not meet both of the speaking and reading proficiency requirements of their positions, up from 29 percent in 2005. The percentage decreases to 25 percent if officers who meet at least one of the two requirements are counted as meeting their positions' requirements. Overall, 1,005 officers in language-designated positions did not meet both of the requirements of their positions, and an additional 334 language-designated positions were vacant (see fig. 1). The persistence of these shortfalls is partially attributable to an overall increase of 332 overseas language-designated positions between 2005 and 2008, many of which are in hard and superhard languages.
At the same time, State increased the overall number of language-proficient officers who meet the requirements for their positions by about 240 officers between 2005 and 2008. State reports annually to Congress on foreign language proficiency in the department; however, its methodology for calculating the percentage of officers who meet the requirements is potentially misleading and overstates the actual language proficiency of FSOs in language-designated positions. For example, State has reported that over 80 percent of employees assigned to vacant language-designated positions met or exceeded the proficiency requirement in each year since fiscal year 2005. According to HR officials responsible for compiling and analyzing these data, however, this figure is not the percentage of officers currently in language-designated positions who have tested scores at or above the requirements for the position; rather, it measures the percentage of officers assigned to language-designated positions who are enrolled in language training, regardless of the outcome of that training. Because several officers do not complete the entire training, while others do not achieve the level of proficiency required even after taking the training, the actual percentage of officers meeting the requirements for their positions is likely lower. While the extent of language deficiencies varies from post to post, some of the greatest deficiencies exist in regions of strategic interest to the United States (see fig. 2). For example, about 40 percent of officers in language- designated positions in the Middle East and South and Central Asia did not meet the requirements for their positions. Further, 57 percent (or 8 officers) and 73 percent (or 33 officers) of officers in Iraq and Afghanistan, respectively, did not meet the requirements for their positions. Other missions with notable gaps include Pakistan (45 percent/5 officers), Egypt (43 percent/13 officers), India (43 percent/12 officers), and Saudi Arabia (38 percent/12 officers). Despite State’s recent efforts to recruit individuals with proficiency in supercritical and critical languages, and some improvement in filling language-designated positions in certain critical languages since 2005, the department continues to experience notable gaps in these languages (see fig. 3). In 2008, 73 more positions in supercritical needs languages were filled by officers meeting the requirements than in 2005. However, 39 percent of officers assigned to LDPs in supercritical languages still do not meet the requirements for their positions, compared with 26 percent in critical languages and 30 percent in all other languages. Specifically, 43 percent of officers in Arabic language-designated positions do not meet the requirements of their positions (107 officers in 248 filled positions), nor do 66 percent of officers in Dari positions (21 officers in 32 positions), 38 percent in Farsi (5 officers in 13 positions), or 50 percent in Urdu (5 officers in 10 positions). Shortfalls vary by position type. Foreign Service specialists—staff who perform security, technical, and other support functions—are less likely to meet the language requirements of their position than Foreign Service generalists. More than half of the 739 specialists in language-designated positions do not meet the requirements, compared with 24 percent of the 2,526 generalists. For example, 53 percent of regional security officers do not speak and read at the level required by their positions. 
According to officials in Diplomatic Security, language training for security officers is often cut short because many ambassadors are unwilling to leave security positions vacant. Further, among Foreign Service generalists, 58 percent of officers in management positions do not meet the language requirements, compared with 16 percent of officers in consular positions and 23 percent of officers in public diplomacy positions.
When posts are unable to fill language-designated positions with language-qualified officers, they must decide whether to request a language waiver and staff the position with an officer who does not meet the language requirements or to leave the position unstaffed until an officer with the requisite skills is available. In some cases, a post chooses to leave a language-designated position vacant for a period of time while an officer is getting language training. In other cases, when a post has requested repeated language waivers for a specific position, it may request that the language requirement be eliminated for the position. According to State, in 2008 the department granted 282 such waivers—covering about 8 percent of all language-designated positions—down from 354 in 2006. State granted a disproportionate number of waivers for South and Central Asia, where the language requirement for about 18 percent of the region's 206 language-designated positions was waived in 2008, compared with 5 percent in both East Asia and the Western Hemisphere.
Our fieldwork for this report, in addition to past reports by GAO, State's Office of the Inspector General, the National Research Council, the Department of Defense, and various think tanks, has indicated that foreign language shortfalls could be negatively affecting several aspects of U.S. diplomacy, including consular operations, security, public diplomacy, economic and political affairs, the development of relationships with foreign counterparts and audiences, and staff morale. It is sometimes difficult to link foreign language shortfalls to a specific negative outcome or event, and senior officials at State have noted that language shortfalls neither prevent officers from doing their jobs nor have catastrophic consequences. However, these officials acknowledged that the cumulative effects of these gaps do present a problem, and the department has not assessed their impact on the conduct of foreign policy. Table 3 presents some examples of such impacts from our current fieldwork, previous GAO reports, and reports by State's Inspector General, the National Research Council, and the Department of Defense.
Officials at one high-fraud visa post stated that, because of language skill deficiencies, consular officers sometimes adjudicate visas without fully understanding everything visa applicants tell them during visa interviews; because of a lack of language skills, they make decisions based on what they "hope" they have heard and, as a result, may be incorrectly adjudicating visa decisions (2006). State's Inspector General found that the ability of consular officers in at least two Arabic-speaking posts to conduct in-depth interviews necessary for homeland security is limited (2005). A consular officer in Istanbul proficient in Turkish said she has seen cases where adjudicating officers have refused visa applications because they did not fully understand the applicant. State's Inspector General found that insufficient Chinese language skills were a serious weakness in the U.S. Mission to China's consular operations (2004).
A security officer in Istanbul said that inability to speak the local language hinders one's ability to get embedded in the society and develop personal relationships, which limits officers' effectiveness. According to one regional security officer, the lack of foreign language skills may hinder intelligence gathering because local informants are reluctant to speak through locally hired interpreters (2006). A study commissioned by the Department of Defense concluded that gaps in governmentwide language capabilities have undermined cross-cultural communication and threatened national security (2005). A security officer in Cairo said that without language skills, officers do not have any "juice"—that is, the ability to influence people they are trying to elicit information from. An officer at a post of strategic interest said that because she did not speak the language, she had transferred a sensitive telephone call from a local informant to a local employee, which could have compromised the informant's identity.

A public affairs officer in one post we visited said that the local media does not always translate embassy statements accurately, complicating efforts to communicate with audiences in the host country. For example, he said the local press translated a statement by the ambassador in a more pejorative sense than was intended, which damaged the ambassador's reputation and took several weeks to correct. According to an information officer in Cairo, the embassy did not have enough Arabic-speaking staff to engage the Egyptian media effectively (2006). Foreign officials we met with noted that speaking the host country's language demonstrates respect for its people and culture; thus fluency in the local language is important for effectively conducting public diplomacy (2003).

In Shenyang, a Chinese city close to the border with North Korea, the consul general told us that reporting about issues along the border had suffered because of language shortfalls. In Tunis, officers told us that Arabic-speaking staff sometimes work outside of their portfolio to cover for colleagues without Arabic skills, which places a larger burden on officers with language skills. An economics officer at one post said that months-long negotiations with foreign government officials were making little progress until American officers began speaking the host country language and a local official who did not speak English could convey valuable information (2006). In Vladivostok, State's Inspector General reported that lack of proficiency in Russian limited the political/economic officer's reporting (2007).

The U.S. ambassador to Egypt said that officers who do not have language skills cannot reach out to broader, deeper audiences and gain insight into the country. Other officials in Cairo noted that the officers in Egypt who do not speak the language tend to inherit the contacts of their predecessor, leading to a perpetually limited pool of contacts. In Afghanistan, State's Inspector General reported that less than one-third of political and economic officers were proficient in a national language, which has led to difficulties in establishing and maintaining relationships with Afghan contacts (2006). In China, officials told us that officers with insufficient language skills get only half the story on issues of interest, as they receive only the official party line and are unable to communicate with researchers and academics, many of whom do not speak English.
The Inspector General has also reported that in Lebanon, political, economic, and public diplomacy officers went to post without sufficient language skills, limiting their efforts to expand their contacts among audiences that do not speak English (2005). The deputy chief of mission in Ankara said that officers who do not have sufficient Turkish skills are reading English-language newspapers rather than what Turks are reading, further limiting their insight into what is happening in the country. Several officers noted that life in Turkey without any Turkish language skills is very inhibiting, particularly for family members who are out in the city every day. The head of the Political/Economic Section in Shenyang said that families are very isolated without Chinese language skills. State's Inspector General found that a lack of Russian language skills inhibits social interaction by many new arrivals in Moscow and by some other community members, many of whom rarely venture out of the embassy compound (2007).

Furthermore, as a result of these language shortfalls, officers must rely on their locally engaged staff to translate for them. Officers at each post we visited said that they frequently take local staff with them to meetings to help translate. For example, a security officer in Cairo said that this tendency makes him feel irrelevant in meetings he should be leading. In Tunis, some officers said that they must use local staff to translate meetings outside of the embassy, but some contacts are reluctant to speak freely in front of other Tunisians. In addition, State's Inspector General has noted that sections in several embassies rely on local staff to translate, monitor the local media, and judge what the section needs to know. The Inspector General also noted problems with this tendency, as overreliance on local translators can make conversations less productive and impose a significant overhead cost that adequate language training could reduce. Furthermore, in its 2004 inspection of the U.S. embassy in Seoul, the Inspector General found that visa adjudications may be based on incorrect information if a consular officer who does not understand basic Korean must rely on translations from locally engaged staff.

State's efforts to meet its foreign language requirements include an annual review process to determine the number of language-designated positions, language training, recruitment of staff with skills in certain languages, and pay incentives for officers to continue learning and maintaining language skills. However, several challenges—such as staffing shortages, the recent increase in language-designated positions, and perceptions about the value of language training in State's promotion system—limit State's ability to meet these requirements.

State determines its foreign language requirements through an annual review process that results in incremental changes but does not necessarily reflect posts' actual needs. Every year, HR directs posts to review all language-designated positions and to submit requests for any changes in the number of positions or level of proficiency. Headquarters officials from HR, FSI, and the regional bureaus then review and discuss these requests and develop a list of positions identified as requiring foreign language skills.
However, views expressed by HR and FSI officials and by FSOs at overseas posts during our meetings, together with our findings in previous work on this issue, suggest that State's designated language proficiency requirements do not necessarily reflect the actual language needs of the posts. State's current instructions to the posts suggest that the language designation review be tempered by budgetary and staffing realities. Consequently, some overseas posts tend to request only the positions they think they will receive. For example, a senior official at one of the overseas posts we visited said that although he would like several positions at the 4/4 proficiency level in his section, he knows the positions will not be designated at that level, so he does not request them. A senior official at another post we visited said he does not request language-designated positions at a higher proficiency level because he knows that ultimately the post will not get enough applicants for the positions. This view was echoed by HR officials who stated that overseas posts must often weigh the desire to attract a large number of applicants against a desire to draw bidders with a higher level of language proficiency. The public affairs officer at one of the overseas posts we visited said he tried to have some language-designated positions in his section downgraded to language-preferred because he had a hard time filling them. Further, HR officials told us that State should conduct a more thorough assessment of language requirements regardless of resource requirements.

Concerns about the process have been a long-standing issue at State. A 1986 State report noted that the language designation system needed to be overhauled on a worldwide basis and recommended that posts carefully review their language-designated positions with the geographic bureaus, eliminating positions that seem unnecessary, adding more if required, deciding how many positions at the 4 proficiency level are needed, and defining what kind of fluency each language-designated position requires. Similarly, one senior official said there should be a systematic review of which positions need language proficiency and which do not, and then the department should decide whether it gives some language training to a lot of people or extensive language training to a select few.

Moreover, officers at the posts we visited questioned the validity of the relatively low proficiency level required for certain positions, citing the need for a higher proficiency level. Officials at most of the posts we visited said that a 3/3 in certain critical languages is not always enough for officers to do their jobs, although they acknowledged the difficulty State would have filling positions at a higher proficiency level. For example, an economics officer at one of the posts we visited said that she could start meetings and read the newspaper with her 3/3 in Arabic, but that level of proficiency did not provide her with the language skills needed to discuss technical issues, and the officers in the public affairs section of the same post said that a 3/3 was not sufficient to effectively explain U.S. positions in the local media. Officers in the public affairs section of another post we visited said that they were not comfortable making statements on U.S. foreign policy with a 3/3 proficiency level. Senior officials at a third post said 3/3 is adequate to ask and answer questions but not to conduct business.
An officer with a 4/4 in Chinese said that officers in his section did the best job they could, but a 3/3 was not enough. He said he sometimes had difficulty at his level, for example, when participating in radio interviews broadcast to local audiences. In addition, consular officers at some of the posts we visited questioned whether a proficiency level of 2 in speaking was sufficient for conducting visa interviews. They said they could ask questions but did not always understand the answers and sometimes had to rely on locally engaged staff to translate. HR officials explained that a position may be classified at 2 when, in reality, a higher level of proficiency is needed. For example, proficiency requirements for untenured positions in certain languages cannot be higher than 2 because of the limits on training for untenured officers.

To meet its language requirements, State uses a combination of language training—at FSI, at advanced language institutes overseas, and through each post's language program—recruitment of officers fluent in foreign languages, and incentive pay. State primarily uses language training, typically at FSI, to meet its foreign language requirements. FSI's School of Language Studies offers training in about 70 languages. State also offers full-time advanced training in superhard languages at a few overseas locations, including Beijing, China; Cairo, Egypt; Seoul, South Korea; Taipei, Taiwan; Yokohama, Japan; and Tunis, Tunisia. In addition, overseas posts offer part-time language training through post language programs, and FSI offers distance learning courses to officers overseas. Finally, FSI offers overseas and domestic mid-course immersion opportunities in many languages, with programs in countries such as Turkey, Russia, and Israel that include classroom study overseas, field trips, and home visits with local families. These immersions serve either as a substitute for some portion of the Washington training or as a complement or refresher to enhance the learner's ability to achieve a higher degree of facility in dealing with the local community and to increase the return on the department's training investment.

State measures the effectiveness of its training in a variety of ways; however, concerns about several aspects of FSI training persist. As an indicator of the success of FSI training, State collects data and reports on the percentage of students who attain the intended proficiency level in all critical languages when they are enrolled in language training for at least the recommended length of training. For 2008, State reported a language training success rate of 86 percent. State also tracks overall satisfaction with all training at FSI and reported a 94 percent satisfaction rate for fiscal year 2008. Officials we met with overseas, however, reported mixed experiences with FSI language training. For example, consular officers in Istanbul described the FSI training as outstanding. Entry-level officers in Cairo said that instruction at the beginning levels at FSI is very good, but that FSI is not well equipped for beyond-3 training. However, FSI officials explained that because there are only two 4/4 language-designated positions in the department, there is almost no formal requirement for FSI to provide such training. FSI officials also stated that without a mandate or the necessary resources, FSI provides beyond-3 training on an ad hoc basis. A few officers questioned the relevance of the foreign language training that they received to their jobs.
Several officers also stated that they were not aware of a formal mechanism for them to provide feedback on this issue to FSI. A few officers said that they provided feedback to FSI, but they were not sure whether their concerns were addressed. FSI officials stated that FSI provides several opportunities for feedback. For example, the institute administers a training impact survey several months after training is completed, eliciting the respondent's opinion of the effectiveness of the training for the respondent's job. However, the response rate for this survey has been low: for 2005, State received 603 of 1,476 possible responses; for 2006, 404 of 1,450 possible responses; and for 2007, 226 of 1,503 possible responses. FSI officials said that another opportunity for feedback is the evaluation students complete at the end of every class.

State also recruits personnel with foreign language skills through special incentives offered under its critical needs language program; however, some officials noted that the department believes it is easier to train individuals with good diplomatic skills to speak a language than it is to recruit linguists and train them to be good diplomats. Under the critical needs program, State offers bonus points for applicants who have passed the Foreign Service exam and demonstrate mastery in a foreign language. The additional points can raise the applicant's ranking on the Foreign Service registry, improving the chances of being hired. Officers recruited for their proficiency in supercritical and critical needs languages are obligated to serve at an overseas post where they can use the language during their first or second tour. Officers recruited since 2008 are also required to serve at a post where they can use the language a second time as a midlevel officer. The effects of this program on State's language proficiency gaps are unclear, in part because State has not established numerical targets for its critical needs hiring and has not yet performed an assessment of its effectiveness. An Office of Recruitment official who was involved in the development of the list of critical needs languages stated that the department could not yet assess the program's effectiveness because the program, which started in 2004, is still new and the department does not have sufficient data to perform such an assessment. The official pointed out that there have been only about five hiring cycles since it started. However, State data show the department has recruited 445 officers under the program since 2004, and about 94 percent of these officers who have had at least two assignments have completed their obligation to serve at an overseas post where they were able to use the language. A total of 19 officers who have either served two tours or have at least arranged the onward assignment for their second tour have definitively not fulfilled the obligation, and most of these cases were due to medical or security reasons. The Office of Recruitment official said that since the requirement for the second tour for midlevel officers is still new, there are few, if any, officers recruited under the critical needs program who have reached the middle level.

State also does not have a formal schedule for reviewing and adding or removing languages from the list of critical needs languages. Officials from the Office of Recruitment said the list has been reviewed informally and Japanese was removed because State is hiring sufficient numbers of Japanese-speaking officers and there are few entry-level language-designated positions at Japanese posts.
State also offers bonus pay to members of the Foreign Service with proficiency in certain languages under the Language Incentive Pay program. To qualify for language incentive pay, officers must have a proficiency of at least 3/3 (for generalists) or 2/2 (for specialists) and be serving either in any position (whether language designated or not) at a post abroad where a language currently on the list of incentive languages is a primary or primary-alternate language, or in a language-designated position requiring an incentive language. The incentive pay varies according to the officer's salary and tested scores. For example, an officer with a 3/3 in Turkish in a language-designated position in Istanbul would be eligible for a bonus of 10 percent of the base salary abroad of an FS-01/step 1 member of the Foreign Service.

State has not measured the impact of the pay incentive on increasing foreign language proficiency, and the officers we met with expressed mixed opinions on the effectiveness of the program. For example, a few officers said it is difficult and takes a long time to advance from a 2 to a 3 to qualify for the incentive, while others said the pay was a very good incentive. Others offered suggestions for improvement. For example, one officer said the requirements for the language incentive program discourage some people from participating and that State should provide incentives for people in increments, for example, for going from a 2 to a 2-plus. He also suggested that State provide incentives separately for speaking and reading, because it takes time to increase proficiency in reading, which is often not needed for the officer to perform his or her job. HR and FSI officials said that State is considering proposals to improve the incentive pay program.

According to senior State officials, the primary challenge State faces in meeting its foreign language requirements is the department's continued staffing shortages. Specifically, State's lack of a sufficient training float has limited the number of officers available for language training. As a result, State has had to choose between assigning an officer to post who may not have the requisite language skills and allowing the position to remain empty while the incoming officer is in language training. As noted above, in October 2008, 334 language-designated positions (9 percent of all language-designated positions) were vacant in addition to 1,005 positions that were filled by officers who did not meet the language requirement for the position. For example, in fiscal year 2006, State's Director General was unable to fill a request by the embassy in Riyadh for two additional language-proficient officers, as recommended by the Inspector General, because of overall staffing shortages. Furthermore, a 2008 report on State resource issues noted that personnel shortages result in training lags, and that ongoing tension over whether staff should complete training assignments or fill positions complicates efforts to create a well-trained workforce. Despite these overall staffing shortages, State has doubled the number of language-designated positions overseas since 2001. Department officials noted that the recent increase in positions requiring a superhard language—that is, one that requires 2 years of training to reach the 3 level—and the number of 1-year tours in these positions have compounded these shortages.
For example, State must budget three people for a 3/3 Arabic language-designated position in Riyadh, which is typically a 1-year tour: one to fill the position, one in the second year of language training to arrive at post the next year, and one in the first year of training to arrive the following year.

Other staffing-related challenges include the following:

Staff time. In some cases, Foreign Service officers lack the time necessary for maintaining their language skills upon arriving at post. Officers we spoke to in Tunis, Ankara, and Cairo said that they do not have enough time in their schedule to fully utilize the post language program. In addition, in 2006, State's Inspector General reported that most political and economic officers in Kabul find that a routine 6-day workweek precludes rigorous language training.

Curtailments. When officers cut short their tours in a language-designated position, there is often no officer with the requisite language skills available to fill the position. Some officers we spoke to said that in some cases, they had to cut short their language training to come to post earlier than expected in order to fill a position vacated by an officer who had curtailed. For example, the regional security officers in Ankara and Tunis said that they left language training after only a few months in order to replace officers who had curtailed to Iraq or elsewhere. In addition, several officers in Shenyang said that they had to leave language training early in order to fill gaps at post.

Position freeze. In recent years, State has left dozens of positions vacant—or "frozen" them—in order to fully staff missions in Iraq and Afghanistan. Officers at several posts we visited said that in order to avoid further shortages at post, the geographic bureaus, at times, have chosen to freeze training positions, rather than overseas positions. Consequently, there is no officer currently in language training for these positions, and posts will either have to request a language waiver or hope that the incumbent already has language skills when filling the position.

In 2009, State received funding for an additional 450 positions, including 300 dedicated to language training. According to the department, these positions will help to increase the training float and reduce gaps at post while officers are in language training. State officials have said that if their fiscal year 2010 request for an additional 200 training positions is approved, they expect to see language gaps close starting in 2011; however, State has not indicated when its foreign language staffing requirements will be completely met, and previous staffing increases have been consumed by higher priorities. For example, in 2003, State officials stated that the increased hiring under the department's Diplomatic Readiness Initiative would create a training float to help eliminate the foreign language gaps at overseas posts within several years. Although the initiative enabled State to hire more than 1,000 employees above attrition, it did not reduce the language gaps, as most of this increase was absorbed by the demand for personnel in Iraq and Afghanistan, and thus the training reserve was not achieved.
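As a rough illustration of the staffing arithmetic behind the Riyadh example above, the following sketch estimates how many officers must be budgeted against a single language-designated position once training time and tour length are taken into account. This is a back-of-the-envelope sketch of our own, not State's workforce-planning methodology; the training lengths and tour lengths shown are illustrative assumptions only.

# Back-of-the-envelope estimate of officers tied up by one language-designated
# position (LDP) once language training is counted.  Illustrative assumptions
# only; this is not State's workforce-planning methodology.

def officers_per_ldp(training_years: float, tour_years: float) -> float:
    """One officer fills the position for each tour while the officers who
    will relieve them are already in training, so each position ties up
    roughly 1 (at post) + training_years / tour_years (in the pipeline)."""
    return 1 + training_years / tour_years

examples = {
    # label: (assumed years of training to reach 3/3, tour length in years)
    "Superhard language, 1-year tour (e.g., Riyadh Arabic)": (2.0, 1.0),
    "Superhard language, 2-year tour": (2.0, 2.0),
    "Hard language, 3-year tour": (1.0, 3.0),
}

for label, (training, tour) in examples.items():
    print(f"{label}: ~{officers_per_ldp(training, tour):.2f} officers per position")

Under these assumptions, a 1-year tour in a superhard language ties up about three officers per position, while a 3-year tour in a hard language ties up fewer than one and a half, which is consistent with the report's later observation that a single officer-to-position ratio target cannot capture differences in tour length.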
Another challenge to State’s efforts to address its language shortfalls is the persistent perception among Foreign Service officers that State’s promotion system undervalues language training; however, while HR officials told us that the system values language training, the department has not conducted a systematic assessment to refute the perceptions. Officers at several posts we visited stated a belief that long-term training, specifically advanced training in hard languages, hinders their promotion chances. For example, officers in Beijing said that some officers are reluctant to study a foreign language that requires a 1- or 2-year commitment because they believe it makes them less competitive for promotion, and one officer said that she would not have bid on her current position if she had had to take Chinese first. A former ambassador told us that many officers feel that language training is a “net minus” to their careers, as the department views this as a drain on the staffing system. We reported similar sentiments in 2006, when several officers said they believed that State’s promotion system might hinder officers’ ability to enhance and maintain their language skills over time. Although senior HR officials told us that the promotion system weighs time in training as equal to time at post, they acknowledged that officers applying for promotion while in long-term training were at a disadvantage compared with officers assigned to an overseas post. Although promotion boards are required by law to weigh end-of-training reports for employees in full-time language training as heavily as the annual employee evaluation reports, officers in Beijing, Shenyang, Istanbul, and Washington expressed concern that evaluations for time in training were discounted. State officials said they have reviewed the results of one promotion board and found a slightly lower rate of promotions for officers in long-term training at the time of the review. However, these officials were not sure if these results were statistically significant and said that the department has not conducted a more systematic assessment of the issue. State’s approach to addressing its foreign language proficiency requirements does not reflect a comprehensive strategic approach. As we previously mentioned, State considers staffing shortfalls and the lack of a training float to be the primary challenges to achieving the department’s language proficiency requirements. However, prior work by GAO and others has shown that addressing a critical human capital challenge—such as closing or reducing the long-running foreign language proficiency gaps within State’s Foreign Service corps—requires a comprehensive strategic plan or set of linked plans that sets a clear direction for addressing the challenge. GAO, OPM, and others have developed a variety of strategic workforce planning models that can serve as a guide for State to develop a comprehensive plan to address its language proficiency gaps. Common elements of these models include setting a strategic direction that includes measurable performance goals and objectives and funding priorities, determining critical skills and competencies that will be needed in the future, developing an action plan to address gaps, and monitoring and evaluating the success of the department’s progress toward meeting goals. 
In 2002, we reported that State had not prepared a separate strategic plan for developing its foreign language skills or a related action plan to correct long-standing proficiency shortfalls and recommended that the department do so. State responded by noting that because language is such an integral part of the department's operations, a separate planning effort for foreign language skills was not needed. During this review, State officials told us that a comprehensive strategic approach to reducing foreign language gaps would be useful. The officials mentioned a number of documents in which the department has addressed its foreign language proficiency requirements in various forms, including the Foreign Language Continuum, the Strategic Plan, a 2007 training needs assessment, and the Five-Year Workforce Plan, but acknowledged that these documents are not linked to each other and no one document contains measurable goals, objectives, resource requirements, and milestones for reducing the foreign language gap.

We reviewed these documents and found that while some include a few of the aforementioned elements of a strategic plan, none of the documents present a comprehensive plan for State to address its foreign language proficiency requirements. For example, the Foreign Language Continuum—a document developed by FSI for FSOs—describes foreign language training opportunities provided by State and, according to FSI officials, was meant to serve as a guide for FSOs and not a plan for reducing language gaps. The joint State-U.S. Agency for International Development (USAID) Strategic Plan contains seven priority goals for achieving State's and USAID's overall mission but only tangentially addresses the issue of foreign languages by stating that the department will expand opportunities for classroom training and distance learning in a number of areas, including foreign languages. It does not discuss whether and how expanding this training will contribute to reducing the department's language proficiency gaps, or establish measurable goals, objectives, or time frames for its performance. The training assessment—a 2007 training study conducted by HR and FSI to assess State's current and future training needs—identified additional positions to be requested in future budget justifications to increase the training float.

State's Five-Year Workforce Plan, which describes the department's overall workforce planning, including hiring, training, and assignment plans, is a step in the right direction. The plan addresses language gaps in the Foreign Service workforce to a greater extent than any of the other documents. However, the plan falls short in several respects. First, the document states that State has established an ongoing monitoring process to identify and set goals for reducing language skill gaps in the Foreign Service. This process resulted in the development of an officer-to-position ratio target of at least 2.5 officers with the required language proficiency for each language-designated position at the 3/3 proficiency level. State reports this ratio as a target for meeting its critical needs language requirements; however, the ratio is not based on quantitative analysis but on the consensus of a working group consisting of HR and FSI officials. In developing the ratio, State assumed that the 2.5 officers already have the required languages and did not link the ratio to the number of officers who should be in language training or to the size of the training float needed to achieve the 2.5 ratio.
Further, State assumed that 3/3 is the appropriate skill level for the positions, although, as we discussed earlier, some officers have questioned the validity of that level for certain positions. Moreover, an HR official responsible for workforce planning at State said that the 2.5 ratio is very broad and not sufficiently detailed or specific. For example, the ratio does not take into account the different tour lengths. More Arabic-speaking officers would be needed for 1-year tours than Russian speakers for 3-year tours, so the languages should not have the same target ratio. Also, the assessment treats Foreign Service officers at all levels equally, even though more senior officers would not fill lower-graded positions. Therefore, even if State achieved the 2.5 ratio for each language-designated position, not all of the language-designated positions would be filled. The HR official explained that State is in the process of improving its methodology for critical needs language assessment.

Despite the various measures that State uses to determine and fill its language-designated positions, it continues to experience persistent gaps in its foreign language skills at many posts around the world, and questions remain about the adequacy of the proficiency requirements. State recognizes the importance of staffing language-designated positions with FSOs who possess the requisite language skills to perform their duties, and has taken some measures intended to address its foreign language shortfalls, including requesting and receiving funding in 2009 to build a training capacity, establishing a career development program that requires FSOs to have sustained professional language proficiency for consideration for promotion into the senior ranks, and offering special incentives to attract speakers of foreign languages under its critical needs language program. However, these individual actions, which State has relied on for several years to address its language proficiency requirements, do not constitute a comprehensive strategic approach to addressing the department's persistent gaps in language proficiency within the Foreign Service, and they are not linked to any targets, goals, or time frames for reducing State's language gaps. Also, State is not fully assessing the progress of its efforts toward closing the language gaps. Actions described in State's Five-Year Workforce Plan, such as the department's attempt to establish an ongoing monitoring process to identify and set goals for reducing the language skill gaps, are a step in the right direction that could be built upon to develop a more comprehensive plan. Given the importance of foreign language competency to the mission of the Foreign Service, any measures taken to address State's language proficiency shortfalls should be part of a comprehensive strategic plan that takes a long-term view and incorporates the key elements of strategic workforce planning. Such a plan will help State guide its efforts to monitor and assess its progress toward closing its persistent foreign language gaps.

To address State's persistent foreign language proficiency shortfalls in the U.S. Foreign Service, this report is making two recommendations. We recommend that the Secretary of State develop a comprehensive strategic plan consistent with GAO and OPM workforce planning guidance that links all of State's efforts to meet its foreign language requirements.
Such a plan should include, but not be limited to, the following elements:
clearly defined and measurable performance goals and objectives of the department's language proficiency program that reflect the priorities and strategic interests of U.S. foreign policy and diplomacy;
a transparent, comprehensive process for identifying foreign language requirements, based on objective criteria, that goes beyond the current annual process, to determine which positions should be language designated and the proficiency level needed to enable officers to effectively perform their duties;
a more effective mechanism that allows State to gather feedback from FSOs on the relevance of the foreign language skills that they acquired at FSI to their jobs; and
mechanisms for assessing the effectiveness of State's recruitment of critical needs foreign language speakers and language incentive payments, as well as future efforts toward closing the department's language proficiency gaps.

To more accurately measure the extent to which language-designated positions are filled with officers who meet the language requirements of the position, we also recommend that the Secretary of State revise the department's methodology in its Congressional Budget Justifications and annual reports to Congress on foreign language proficiency. Specifically, we recommend that the department measure and report on the percentage of officers in language-designated positions who have tested at or above the level of proficiency required for the position, rather than officers who have been assigned to language training but who have not yet completed this training.

State provided written comments on a draft of this report. The comments are reprinted in Appendix II. State generally agreed with the report's findings, conclusions, and recommendations and described several initiatives that address elements of the recommendations. In further discussions with State to clarify its response, an official of HR's Office of Policy Coordination stated that State agrees with GAO that it needs some type of plan or process to pull together its efforts to meet its foreign language requirements, but that it has not yet determined what form this action will take. The official further explained that State recently convened an inter-bureau language working group, which will focus on and develop an action plan to address GAO's recommendations. State also provided technical comments, which we have included throughout this report as appropriate.

As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to the Secretary of State and interested congressional committees. The report also is available at no charge on the GAO Web site at http://www.gao.gov.

In this report, we (1) examine the extent to which State is meeting its foreign language requirements and the potential impact of any shortfalls on U.S. diplomacy, (2) assess State's efforts to meet its foreign language requirements and describe the challenges it faces in doing so, and (3) assess the extent to which State has a comprehensive strategy to determine and meet these requirements. To analyze the extent to which State is meeting its foreign language requirements, we obtained data from State on all overseas language-designated positions and the language skills of the incumbent filling the position as of October 31, 2008.
We compared the incumbent’s reading and speaking scores with the reading and speaking levels designated for the position, and determined that the incumbent met the requirements for the position only if his or her scores equaled or exceeded both the speaking and reading requirements. A limited number of positions are designated in two languages. We determined that the officer met the requirements of such positions if he or she met the speaking and reading requirements for at least one of the designated languages. We also interviewed State officials responsible for compiling and maintaining these data and reviewed data maintained by some of the posts we visited on their language-designated positions, and determined the data to be sufficiently reliable for identifying the number of language-designated positions filled by officers who met the requirements of the position. To assess the potential impact of foreign language shortfalls on U.S. diplomacy, we reviewed previous GAO reports, as well as reports by State’s Inspector General, the National Research Council, the Congressional Research Service, the Department of Defense, and various think tanks. We interviewed officials from State’s Bureaus of African Affairs, Consular Affairs, Diplomatic Security, European Affairs, Human Resources, East Asian and Pacific Affairs, Near Eastern/South and Central Asian Affairs, Public Affairs, and Western Hemisphere Affairs, and the Foreign Service Institute. We also interviewed officials at overseas posts in Beijing and Shenyang, China; Cairo and Alexandria, Egypt; New Delhi, India; Tunis, Tunisia; and Ankara and Istanbul, Turkey. We selected these posts based on the number of language-designated positions in supercritical (e.g., Arabic, Chinese, and Hindi) or critical needs (e.g., Turkish) languages, the extent of language gaps, and the location of FSI field schools. We also met with former senior State officials, including former ambassadors to Russia, Afghanistan, and Armenia; a former dean of FSI’s School of Language Studies; and the former acting Director General of the Foreign Service to gain their insights on the consequences of language shortfalls at overseas missions. In total, we interviewed about 60 officials in Washington, D.C., and over 130 officers overseas. To assess how State determines and meets its foreign language requirements, we reviewed past GAO reports; State planning documents, including the strategic plan, the performance report, and budget justification; State cables on the language designation process; and workforce planning guidance. We also interviewed State officials in Washington, D.C., and at overseas posts. To describe the challenges that State faces in meeting its foreign language requirements, we reviewed State department budget and planning documents. We analyzed State’s promotion precepts, Career Development Program, and instructions provided to Foreign Service promotion boards. We also interviewed State officials in Washington, D.C., and at overseas posts. To assess the extent to which State has a comprehensive strategy to determine and meet its foreign language requirements, we reviewed prior GAO reports on strategic workforce planning and State planning documents, including the department’s strategic plan, the Language Continuum, and the Five-Year Workforce Plan. We compared State’s planning efforts to reduce foreign language gaps with guidance on comprehensive workforce planning developed by GAO and the Office of Personnel Management. 
We also interviewed officials from the Bureau of Human Resources and others. We conducted this performance audit from August 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Godwin Agbara, Assistant Director; Robert Ball; Joseph Carney; and La Verne Tharpes made key contributions to this report. Martin de Alteriis and Elizabeth Singer provided technical assistance.
Proficiency in foreign languages is a key skill for U.S. diplomats to advance U.S. interests overseas. GAO has issued several reports highlighting the Department of State's (State) persistent foreign language shortages. In 2006, GAO recommended that State evaluate the effectiveness of its efforts to improve the language proficiency of its staff. State responded by providing examples of activities it believed addressed our recommendation. In this report, which updates the 2006 report, GAO (1) examined the extent to which State is meeting its foreign language requirements and the potential impact of any shortfall, (2) assessed State's efforts to meet its foreign language requirements and described the challenges it faces in doing so, and (3) assessed the extent to which State has a comprehensive strategy to determine and meet these requirements. GAO analyzed data on State's overseas language-designated positions; reviewed strategic planning and budgetary documents; interviewed State officials; and conducted fieldwork in China, Egypt, India, Tunisia, and Turkey.

As of October 31, 2008, 31 percent of Foreign Service officers in overseas language-designated positions (LDP) did not meet both the foreign language speaking and reading proficiency requirements for their positions. State continues to face foreign language shortfalls in regions of strategic interest--such as the Near East and South and Central Asia, where about 40 percent of officers in LDPs did not meet requirements. Despite efforts to recruit individuals with proficiency in critical languages, shortfalls in supercritical languages, such as Arabic and Chinese, remain at 39 percent. Past reports by GAO, State's Office of the Inspector General, and others have concluded that foreign language shortfalls could be negatively affecting U.S. activities overseas. Overseas fieldwork for this report reaffirmed this conclusion.

State's approach to meeting its foreign language requirements includes an annual review of all LDPs, language training, recruitment of language-proficient staff, and pay incentives for language skills. For example, State trains staff in about 70 languages in Washington and overseas, and has reported a training success rate of 86 percent. Moreover, State offers bonus points for language-proficient applicants who have passed the Foreign Service exam and has hired 445 officers under this program since 2004. However, various challenges limit the effectiveness of these efforts. According to State, a primary challenge is overall staffing shortages, which limit the number of staff available for language training; the recent increase in LDPs has compounded this challenge.

State's efforts to meet its foreign language requirements have yielded some results but have not closed persistent gaps and reflect, in part, a lack of a comprehensive, strategic approach. State officials have said that the department's plan for meeting its foreign language requirements is spread throughout a number of documents that address these needs; however, these documents are not linked to each other and do not contain measurable goals, objectives, or milestones for reducing the foreign language gaps. Because these gaps have persisted over several years despite staffing increases, we believe that a more comprehensive, strategic approach would help State to more effectively guide its efforts and assess its progress in meeting its foreign language requirements.
Maintaining an overseas military presence that is prepared to deter threats and engage enemies remains an enduring tenet of U.S. national military strategy and priorities. For example, the National Military Strategy notes that an overseas presence supports the ability of the United States to project power against threats and support establishment of an environment that reduces the conditions that foster extremist ideologies. By being forward-deployed, maritime forces can develop familiarity with the environment and behavior patterns of regional actors.

The Navy has traditionally maintained overseas presence by using standard deployments whereby individual ships and their permanently assigned crews are deployed for approximately 6 months out of a 27-month cycle. However, the amount of time a ship ultimately spends forward-deployed in a theater of operations is affected by several factors in its employment cycle. These factors include length of deployment, transit speeds to and from operating areas, port calls, crew training and certification, ship maintenance requirements, and maintaining sufficient readiness for surging forces during nondeployed periods. The result is that a ship homeported in the United States and deploying to the Persian Gulf area for 6 months will normally spend less than 20 percent of its 27-month cycle in-theater and that the Navy would need about six ships to maintain a continuous presence in the region over a 2-year period.

Rotational crewing has been proven to provide greater forward presence for Navy ships by eliminating ship transits and maintaining more on-station time in distant operating areas. Specifically, the 2004 Pacific Fleet Destroyer Sea Swap initiative demonstrated that rotational crewing provides more forward presence with fewer ships. For example, one Pacific Fleet destroyer, rotationally crewed with three sequentially deployed crews, produced an additional 16 days of forward presence compared with a standard four-ship/four-crew deployment. The Atlantic Fleet DDG Sea Swap initiative produced similar results. For example, one Atlantic Fleet destroyer, rotationally crewed with three crews, produced 25 days more of forward presence than a standard four-ship/four-crew deployment. Assessments completed by the Center for Naval Analyses and the Office of the Chief of Naval Operations confirmed the results of the Pacific and Atlantic Sea Swap initiatives. Using the Blue-Gold alternative, the HSV-2 Swift has achieved an operations tempo of more than 80 percent and the four newly converted guided missile submarines expect to spend two-thirds of their operational cycles forward-deployed in the operations area.

At costs ranging from $500 million to $5 billion each, the Navy's surface combatants represent a significant capital investment. As the Navy faces cost growth in new ship classes and federal fiscal challenges, rotational crewing may be one alternative it could use to meet mission requirements and mitigate the effects of cost growth on ship requirements as embodied in the Navy's long-range shipbuilding plan and maritime strategy. The Congressional Budget Office and Center for Naval Analyses have also noted the procurement savings achieved as a result of using rotational crewing on ships. In 2007, the Chief of Naval Operations recognized the challenge of accomplishing the Navy's missions within its budget.
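The presence arithmetic described earlier in this section (a 6-month deployment within a 27-month cycle leaves well under 20 percent of the cycle in theater, so roughly six ships are needed to keep one continuously on station) can be reproduced with a simple calculation. The sketch below is our own illustration, not a Navy model; the round-trip transit time is an assumed value chosen only to make the arithmetic concrete.

# Rough illustration of why a 6-month deployment in a 27-month employment
# cycle translates into several ships per continuously filled station.
# Our own illustration, not a Navy model; transit time is an assumed value.

CYCLE_MONTHS = 27          # standard employment cycle
DEPLOYMENT_MONTHS = 6      # deployed portion of the cycle
TRANSIT_MONTHS = 1.5       # assumed round-trip transit to and from a distant theater

on_station = DEPLOYMENT_MONTHS - TRANSIT_MONTHS
fraction_on_station = on_station / CYCLE_MONTHS
ships_per_station = CYCLE_MONTHS / on_station

print(f"On-station share of cycle: {fraction_on_station:.0%}")          # ~17 percent
print(f"Ships needed per continuous station: {ships_per_station:.1f}")  # ~6

Under these assumptions the figures roughly match the report's estimates; rotational crewing improves the ratio by keeping the hull forward and rotating only the crews, which is how the Sea Swap destroyers generated additional days of forward presence with fewer ships.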
The Chief of Naval Operations explained that there is extraordinary pressure to balance the Navy's personnel, operations, and procurement accounts in today's fiscal environment. Meanwhile, the Navy has faced increased criticism for rising shipbuilding costs. The increasing cost of surface ships has led the Navy to reduce procurements, and the resulting loss of economies of scale has driven costs of individual surface ships even higher. We have reported that significant cost growth and long schedule delays are persistent problems in both new and follow-on ships. We also reported that the Navy has developed and implemented several initiatives to increase the operational availability of Navy and Marine Corps fleet forces, including the Fleet Response Plan and rotational crewing. Navy officials have cited these initiatives as ways to increase readiness and reduce the numbers of ships needed in the Navy's force structure, thereby freeing funding for other priorities.

Decisions made in setting requirements very early in a ship's development have an enormous effect on the cost of the system over its life. Life-cycle costs include the costs to research, develop, acquire, own, operate, maintain, and dispose of weapon and support systems. These costs include the facilities and training equipment, such as simulators, unique to the system. Navy analyses show that by the second acquisition milestone (which assesses whether a system is ready to advance to the system development and demonstration phase), roughly 85 percent of a ship's life-cycle cost has been "locked in" by design, production quantity, and schedule decisions, while less than 10 percent of its total costs have actually been expended. (See fig. 1.) Figure 1 depicts the relative apportionment of research and development, procurement, and operating and support costs over the typical life cycle of a ship program (the complete life cycle of a ship, from concept development through disposal, typically ranges from 40 to 60 years). Research and development funds are spent at program initiation and generally constitute only a small fraction of a new ship's costs. Then, in the next acquisition phase, procurement funds are spent to acquire the new ship. The vast majority of life-cycle costs consists of operating and support costs and is incurred over the life of the ship.

Recognizing that fiscal constraints pose a long-term challenge, DOD policy states that life-cycle costs of new military systems should be identified and that all participants shall plan programs based on realistic projections of the dollars and manpower likely to be available in future years. This approach, referred to as treating cost as an independent variable, requires program managers to consider cost-performance trade-offs in setting program goals. During the acquisition process, program managers are held accountable for making progress toward meeting established goals and requirements at checkpoints, or milestones, over a program's life cycle. These goals and requirements are contained in several key documents, including the initial capabilities document and the analysis of alternatives. An initial capabilities document describes an operational gap or deficiency, or opportunity to provide new capabilities, in operational terms and identifies possible material and nonmaterial solutions, including approaches involving, among other things, personnel and training, that may be used to satisfy the need.
These capabilities and constraints are examined during a study called the analysis of alternatives. The DOD instruction outlining the process for acquiring major weapons systems establishes the requirement for developing an analysis of alternatives to support major acquisition milestones and decision reviews. An analysis of alternatives is a documented analytical evaluation of the performance, operational effectiveness, operational suitability, and estimated costs (including full life-cycle costs) of alternative systems to meet a mission capability that has been identified through the department's capabilities and requirements process. Preparation of an analysis of alternatives is generally required during the Concept Refinement Phase, which is early in the defense acquisition process—even prior to formal initiation of a program—as shown in figure 2. An analysis of alternatives is required at an early stage to ensure that all potential alternative means of satisfying the stated capability are considered. The analysis of alternatives assesses the advantages and disadvantages of various alternatives being considered to satisfy the needed capability, including the sensitivity of each alternative to possible changes to key assumptions (e.g., threat) or variables (e.g., performance capabilities). The analysis is intended to aid decision makers in judging whether or not any of the proposed alternatives to an existing system offer sufficient military or economic benefit, or both, to be worth the cost. In preparation for subsequent milestones, the analysis is updated, or a new one conducted, depending on then-existing circumstances. Additionally, the Department of the Navy has issued guidance containing mandatory procedures for implementation of DOD's acquisition instruction and process. The Navy's guidance requires an analysis of alternatives to include an analysis of doctrine, organization, training, materiel, management, leadership, personnel, and facilities as well as joint implications.

In addition to the standard ship and crew employment cycle, the range of Navy crewing alternatives falls into three major categories: (1) Sea Swap, (2) Horizon, and (3) Blue-Gold. Each of these alternatives can be implemented in varying ways and may have different advantages and disadvantages and effects on life-cycle costs, but the Navy's actual experience with nonstandard crewing alternatives on surface ships is limited. Sea Swap is the only crewing alternative that has been used on ships as large as surface combatants.

The standard crewing option uses one crew per ship. Most of the crewmembers are assigned to the ship for 4 years, and it is common for crewmembers to deploy overseas on the same ship more than once. Ships deploy to forward operating areas for periods of 6 or more months on average. On a 6-month deployment to the Arabian Gulf, ships spend 3 to 4 months of that deployment actually on station, depending on whether the ship deploys from the east or west coast of the United States. When not deployed, the ships fulfill surge deployment requirements, undergo maintenance availabilities, and conduct training and certifications to maintain mission capability. Most Navy ships and their crews employ the standard crew deployment option.

The Sea Swap option uses one deploying ship but multiple sequentially deploying crews. Newly deploying crews swap ships with the crew on the forward-deployed ship. Nondeployed crews train and perform maintenance on a ship in the home port.
Sea Swap normally operates with groups of two, three, or four ships and crews. The crews rotate through the ships in the assigned group. Notionally under this option, one of the ships deploys two, three, or four times longer than the standard time by rotating crews every 6 months at an overseas location. Ideally, all of the Sea Swap ships share an identical configuration, so crew performance and capability are not degraded because of ship differences. Because crews do not return to the ships on which they trained, under a four-ship Sea Swap option, some crews could serve on three different ships in just over 6 months and be expected to demonstrate combat proficiency on each one. A limited number of destroyers have employed the Sea Swap option in recent years.

The Horizon option involves one or two more crews than ships, such as four crews for three ships or five crews for three ships. Crews serve for no more than 6 months on ships that are deployed for 18 months or more. Under a three-ship Horizon option, crews could serve on at least two ships in just over 6 months and be expected to demonstrate combat proficiency on each one. In addition, each crew would be without a ship for a period of time and stay ashore at a readiness, or training, center. This crewing option was employed on mine countermeasure and patrol coastal ships in recent years.

The Blue-Gold option assigns two complete crews, designated "Blue" and "Gold," to a single ship. Most of the crewmembers are assigned to a ship for several years, and it is common for them to deploy overseas on the same ship more than once. Crew deployments would not exceed 6 months and are often of much shorter duration. An advantage of this option is the crews' familiarity with the ship. However, a disadvantage is that proficiency can degrade since crews sometimes do not have a ship on which to train when not deployed and must rely on mock-ups and simulators at a training facility. The strategic and guided missile submarine forces and the HSV-2 Swift have employed the Blue-Gold alternative.

Rotational crewing has been a part of the Navy for over 40 years, but the Navy's experience with this crewing concept on its surface fleet has been more recent and limited to a small number of ships and ship types. The Navy has used the Blue-Gold crewing approach on its ballistic missile submarines since the 1960s; however, until the mid-1990s, rotational crewing was not practiced on surface ships. In the mid-1990s, the Navy was in search of a new operational approach that would allow it to meet forward-presence requirements and maintain surge capability. The Navy developed the Horizon approach, which sustained readiness by maintaining people and platforms in a continually ready state. This concept was originally used on mine countermeasure ships in the mid-1990s, and was later adopted by coastal minehunter and patrol coastal ships in 2003. In the same year, the Navy employed the Blue-Gold rotational crewing approach on the HSV-2 Swift. Beginning in 2007 with the U.S.S. Ohio's deployment as a guided missile submarine, the Navy has implemented the Blue-Gold rotational crewing alternative on the four Ohio-class strategic missile submarines converted to guided missile submarines. Rotational crewing experiments have also been conducted on Navy destroyers in the Pacific and Atlantic Fleets. Beginning in 2002, seven Pacific Fleet destroyers and their crews participated in the Sea Swap rotational crewing demonstration.
The Sea Swap approach was tested again in 2005, this time using three of the Navy’s 22 Atlantic Fleet destroyers in what is known as the Atlantic Fleet DDG Sea Swap initiative. Rotational crewing has not been used on the Navy’s cruisers, amphibious ships, aircraft carriers, or support ships (other than the HSV-2 Swift). Table 1 shows the rotational crewing alternatives employed by the Navy from the 1990s through the present.

Although the Navy has taken action to provide leadership in specific rotational crewing programs and transform its ship-crewing culture, the Navy has not fully established a comprehensive management approach to coordinate and integrate rotational crewing efforts throughout the department. Specifically, the Navy has not fully incorporated key management practices to manage the transformation of the Navy’s ship-crewing culture—such as providing top-down leadership and dedicating an overarching implementation team—that our prior work has shown to be critical to successful transformations.

Rotational crewing represents a transformational cultural change for the Navy. An organization’s culture encompasses the values and behaviors that characterize its work environment. The Navy has a long history devoted to the one crew, one ship model whereby individual ships and their permanently assigned crews are deployed approximately 6 months out of a 27-month cycle. Rotational crewing on surface ships is a relatively new concept for the Navy, with only one use before 2002. Sailors in several focus groups told us that rotational crewing stands in stark contrast to the normal deployment cycle of the Navy. They added that, in order to be successful, the Navy’s crewing culture would have to be transformed. Then–Chief of Naval Operations Admiral Vern Clark echoed this message in 2005, stating that rotational crewing has changed the face of the Navy, and that in any organizational transformation, people are almost always resistant to change. If not properly managed, rotational crewing can have a negative effect on mission performance and retention. For example, we reported in 2004 that the Pacific Sea Swap experiments lacked proper management, including effective guidance and oversight. Focus groups with Pacific Sea Swap sailors reported training deficiencies, increased maintenance tasks, and a degraded quality of life. Further, lower reenlistment rates were found for sailors with less than 6 years of service.

Successful rotational crewing efforts require management practices that lead a transformation of the Navy’s ship-crewing culture. While the Navy has provided leadership in some specific rotational crewing programs, the Navy has not provided top-down leadership to manage and integrate all rotational crewing efforts throughout the Department of the Navy. We reported in 2003 that key practices and implementation steps for successful transformations include ensuring that top leadership drives the transformation. The Commander, Naval Surface Forces, has been clearly and personally involved in leading the transformation of the Navy’s ship-crewing culture in the implementation of Littoral Combat Ship (LCS) rotational crewing. The Commander has set the direction, pace, and tone for the transformation, while institutionalizing accountability. For example, the Commander has instituted a set of cardinal rules that emphasize seizing the opportunity and embracing change as part of the transformation.
One of these cardinal rules is not to compare the LCS to legacy platforms because the LCS cannot be manned, trained, equipped, maintained, or tactically employed in the same way. Further, the Commander has presented a clear and compelling picture of what the LCS community needs to achieve, helping to build morale and commitment to the rotational crewing concept. For example, the Commander has articulated a succinct and compelling reason for adopting rotational crewing, demonstrating commitment to making the change. Command officials explained that this has helped sailors and personnel throughout the LCS and Surface Forces command understand and share the Commander’s expectations, engendering both their cooperation and ownership of these outcomes. In addition, the Vice Chief of Naval Operations provided top-down leadership in the Atlantic Fleet DDG Sea Swap initiative, recognizing shortcomings in the Pacific Sea Swap initiative. Citing recommended actions in our 2004 report on the Pacific Sea Swap, the Vice Chief of Naval Operations directed Naval Surface Forces Atlantic to develop goals, standardized guidance, metrics, and a comprehensive strategy for future rotational crewing initiatives. This transformational leadership, however, has been limited to the implementation of the LCS and Atlantic Fleet DDG Sea Swap rotational crewing efforts.

The Navy has not provided top-down, sustained leadership to manage and integrate all rotational crewing efforts. The Chief of Naval Operations has noted the success of rotational crewing programs and their potential to increase forward presence without buying more ships. However, with six rotational crewing efforts currently underway, the Navy has not assigned clear leadership and accountability for managing these efforts, including designating responsibility for integrating and applying program results to the fleet, an action necessary to guide the transformation of the Navy’s ship-crewing culture. For example, the Atlantic Fleet DDG Sea Swap initiative successfully increased forward presence and generated total operational cost savings of nearly $10 million. However, Fleet Forces Command, in its final report on the Atlantic Fleet DDG Sea Swap initiative, stated that no future Sea Swaps are planned. The report states that rotational crewing would be considered only if an expansion of missions and roles for the destroyer class (such as the addition of a missile defense capability) decreased the total number of destroyers available. According to Navy sailors and officials, Navy leadership also has not identified incentives for rotational crewing necessary to lead the transformation. Several sailors in focus groups with rotational crews reported that port calls and defined employment periods were critical to successful rotational crewing programs. To date, Navy leadership has not consistently managed these incentives or implemented them in each rotational crewing program. For example, mine warfare ship sailors in focus groups reported that their deployment schedules were unpredictable, resulting in poor quality of life. The Navy does not have top-down leadership because it does not have overarching guidance for rotational crewing that assigns leadership within the Chief of Naval Operations. Without top-down, sustained Navy leadership, including assigning responsibility for managing rotational crewing efforts, the Navy cannot be assured that rotational crewing is developed in an efficient or sustainable manner.
Although the Navy has established implementation teams for selected rotational crewing initiatives, it has not established an implementation team for managing all rotational crewing programs. We reported in 2003 that key practices for successful transformations include establishing an implementation team responsible for the day-to-day management of the transformation to ensure that various initiatives are integrated. Such a team would ensure that rotational crewing receives the focused, full-time attention necessary to be sustained and effective by keeping efforts coordinated and by integrating and applying implementation results to the fleet. The LCS community illustrates how such an implementation team can be structured. The LCS team is led by an Oversight Board, chaired by the Commander, Naval Surface Forces, with executive-level representatives from program executive offices, program sponsors, and other major stakeholders. Two cross-functional teams report directly to the Oversight Board: one addresses manning and training issues, and the other addresses logistics and maintenance issues. Additional LCS team members include representatives from the LCS community, Naval Surface Forces Pacific, other appropriate functional disciplines, and a senior-level executive working group, the Council of Captains (see fig. 3). Naval Surface Forces officials explained that, together, the implementation team groups review issues and barriers associated with the LCS program and jointly develop solutions. The process is documented in detailed Plans of Action and Milestones that list barriers, solutions, and planning goals.

Other rotational crewing initiatives have benefited from implementation teams. For example, Naval Surface Forces established an implementation team to coordinate all involved activities and organizations in the Atlantic Fleet DDG Sea Swap initiative. The team included Naval Surface Forces Atlantic staff from multiple directorates, regional support organization representatives, ship commanding and executive officers, Board of Inspection and Survey members, a public affairs officer, and others. The team ensured that the execution of the initiative ran smoothly and provided a communications structure to facilitate coordination among all participants and support organizations. Submarine Group Trident command officials also benefited from implementation teams in preparing for swapping Blue and Gold crews overseas to support newly converted guided missile submarines. Submarine Group Trident command officials explained that they conducted multiple tabletop exercises to address maintenance support teams, overseas repairs, and travel logistics. Command officials further noted that working groups were formed to address specific challenges associated with forward-deployed crew swaps, such as selecting the type of aircraft to move the crews and procedures for storing spare parts, and to develop a preexercise plan. Drawing on the tabletop exercises, working group preparation, and the preexercise plan, the guided missile submarine U.S.S. Ohio completed the first forward-deployed submarine crew swap in over 15 years, successfully transporting supplies, paperwork, and the crew.

Implementation teams, however, have not been utilized in all rotational crewing initiatives. Navy officials explained that no implementation team exists to manage the patrol coastal or mine warfare ship rotational crewing efforts.
In focus groups, patrol coastal and mine warfare ship sailors reported poor quality of life, insufficient training and professional development time, inconsistent accountability during ship turnovers, and little, if any, support for the crewing transformation. Without an implementation team to devote focused attention, provide a communication structure, and apply lessons from other rotational crewing efforts, the Navy may not effectively resolve these issues on patrol coastal and mine warfare ships.

There are several groups within the Navy with key roles in rotational crewing programs; however, none of these groups has the overall authority, responsibility, and accountability to coordinate and integrate all rotational crewing efforts. For example, Fleet Forces Command serves as the single voice for fleet requirements and coordinates standardized policy for manning, training, and maintaining fleet operating forces. A key strategic priority for Fleet Forces Command is delivering optimal readiness and operational availability of forces at best cost, managed through best practices and shared information supporting informed decisions by Commanders. The Office of the Chief of Naval Operations, Integration of Capabilities and Resources directorate, is responsible for optimizing Navy investments through centralized coordination of Navy warfighting and warfighting support analysis and assessments, Navy capability development and integration, joint and Navy requirements development, and resource programming. Naval Sea Systems Command builds, buys, and maintains the Navy’s ships and submarines and their combat systems, and directs resources from program sponsors into the proper mix of manpower and resources to properly equip the fleet. Recently established Class Squadrons are functional command organizations specific to particular ship classes (e.g., Patrol Coastal, LCS) and are responsible for manning, training, equipping, and maintaining processes. Class Squadrons use metric-based analysis to assess readiness, examine class trends, establish lessons learned, and provide recommendations and solutions. Other groups with critical involvement in the implementation of rotational crewing efforts include Naval Surface Forces, Naval Submarine Forces, and many others. However, none of these groups has the overall authority, responsibility, and accountability to coordinate and integrate all rotational crewing efforts because the Navy has not specified how this will be accomplished in an overarching guidance document for rotational crewing. Without formally designating an overarching implementation team with diverse representation to provide day-to-day management oversight of rotational crewing efforts, the Navy cannot be assured that rotational crewing programs will be coordinated and integrated, and their results applied to the rest of the fleet. As a result, the Navy may fail to lead a successful transformation of its ship-crewing culture.

The Navy’s development, dissemination, and implementation of rotational crewing guidance have been inconsistent, which could hinder rotational crewing efforts. The Navy has not developed an overarching directive that provides high-level vision and guidance for rotational crewing initiatives and has been inconsistent in addressing rotational crewing in individual ship-class concepts of operations. However, the Navy has developed and promulgated crew-exchange instructions that have provided some specific guidance for crew turnovers and increased accountability.
The Navy has not developed and promulgated an overarching directive that provides the high-level vision and guidance needed to ensure that all rotational crewing efforts are effectively managed, thoroughly evaluated, and successfully implemented. Some communities involved in rotational crewing efforts have developed policies and procedures specific to their community, whereas others have implemented rotational crewing without the benefit of such instructions. For example, the Navy established specific policies and procedures for the execution of the Atlantic Fleet DDG Sea Swap initiative. However, as discussed throughout this report, there is no Navy-wide vision or policy on when and why to consider rotational crewing as an alternative; how to develop implementation plans; and how to share and use lessons learned. As a result, rotational crewing has been inconsistently implemented and assessed across the Navy. According to DOD guidance on directives, an overarching directive for rotational crewing should provide essential policy and guidance to achieve the desired outcome and should delegate authority and assign responsibilities. According to Navy guidance, a directive could be used to assign a mission, function, or task; initiate or govern a course of action or conduct; establish a procedure, technique, standard, guide, or method of performing a duty, function, or operation; and establish a reporting requirement. Without this overarching directive, the Navy may not have the high-level guidance to effectively manage, implement, and evaluate rotational crewing as a means of increasing capabilities and reducing costs.

The Navy has inconsistently addressed rotational crewing in concepts of operations for ship classes employing rotational crewing. A concept of operations is an important leadership and management tool because it provides critical high-level information that describes how a set of capabilities may be employed to achieve desired objectives or a particular end state for a specific scenario and identifies by whom, where, and, most importantly, how an activity or function should be accomplished, employed, and executed. In addition, determination of these details enables the development of metrics that support rigorous assessment of the real or proposed capabilities. While the guided missile submarine, LCS, and DDG communities relied on a concept of operations, other commands supporting operations conducted by rotationally crewed surface ships have not developed or used a concept of operations. The guided missile submarine community relied on a concept of operations that addressed the platform’s operational capabilities and challenges while indicating the importance of leveraging the existing maintenance and training infrastructure. This concept of operations also described how operational availability would be increased by using two alternating crews and the special factors that need to be considered in a ship’s employment. The Atlantic Fleet DDG Sea Swap Concept of Operations provided stakeholders with a high-level description of the rotational crewing alternative it employed, the principles that drove its execution, the rationale behind key decisions, and the roles and responsibilities of individual decision makers, managers, and leaders involved in its execution.
Although the guided missile submarine, LCS, and DDG communities utilized concepts of operations, the patrol coastal and mine countermeasures ship communities lacked the benefit of a concept of operations. While these communities relied on existing policies and procedures to address some aspects of rotational crewing, such as the exchange of command guidance, they did not have a concept of operations that articulated the vision, purpose, and plan for rotationally crewed surface ships and their crews. They also did not benefit from access to the high-level information and guidance needed specifically for rotational crewing to address critical personnel, supply, maintenance, and training issues. During focus group discussions with crewmembers representing both surface-ship communities, discontent was voiced about the lack of training, particularly the lack of advanced schools needed to increase technical proficiency; personnel shortages that affected crew cohesiveness; minimal maintenance support provided by teams overseas; and inadequate supply support that failed to deliver critical equipment when it was needed. These inconsistencies have occurred because the Navy does not have overarching guidance for rotational crewing and has not developed concepts of operations to guide each individual rotational crewing initiative. Without Navy-wide overarching guidance on rotational crewing and individual ship-class concepts of operations to ensure effective management, execution, and evaluation of rotational crewing efforts, current and potential surface ship rotational crewing initiatives may not be efficiently and effectively implemented. As a result, the Navy increases the risk that it will be unable to effectively communicate its vision of this transformational effort, and will be unable to effectively implement, manage, and institutionalize rotational crewing.

In February 2005, the Commander, Naval Surface Forces, promulgated specific guidance detailing how the crew exchange process should be conducted, both to ensure accountability during crew exchanges and to give individual ship communities a model for developing instructions tailored to their specific needs. By developing, disseminating, and implementing an exchange of command instruction, the Navy recognized that effective guidance is a key management tool needed to overcome challenges associated with change, such as rotational crewing on surface ships, and to facilitate efficient operations while establishing and maintaining oversight and accountability. The guidance stipulated that (1) the crew exchange process should nominally take 4 days; (2) the crews involved in the transition process should familiarize themselves with turnover guidance well in advance of the actual transition; and (3) when possible, an advance team should complete as much of the turnover process as possible before the crew exchange begins.
Additionally, to promote accountability and to ensure that individuals assuming duties on a new ship are properly prepared to discharge their responsibilities, the guidance requires the commanding officer transitioning off the ship to initiate an exchange of command letter that addresses specific issues, including the material condition of the ship; equipment issues and deficiencies noted in casualty reports; inspection results; logistical issues, including the status of shipboard equipment identified in the ship’s consolidated shipboard allowance list; classified material inventories; and supply and budgetary issues affecting the ship’s financial posture. Furthermore, individual commands involved in or preparing to engage in rotational crewing on surface ships also have developed or are in the process of developing guidance, similar in format and content to the Naval Surface Forces crew exchange guidance, but tailored to their specific needs (for example, their unique missions, operations, or equipment). For example, the Mine Warfare Command issued an instruction addressing crew swap checklists to be used during crew rotations conducted aboard HSV-2 Swift. Likewise, Mine Countermeasures Squadron Two issued detailed guidance to address crew rotations occurring aboard Mine Countermeasures Ships, and the Patrol Coastal Class Squadron issued guidance to provide procedures covering crew rotations. These instructions addressed the unique requirements associated with rotationally crewed surface ships by discussing multicrew training, advance correspondence between crews, and training exercises needed to prepare crews to effectively conduct operations within a specific operational area. In addition, LCS squadron officials are overseeing the creation of a combined directives manual containing directives, procedures, and policies that address issues such as the rotational crewing turnover process, training, maintenance, and logistical requirements. The LCS guidance is intended to divide responsibilities between those stationed ashore and those afloat, define daily operations, promote teamwork, and support continuity of command. These crew exchange instructions have addressed some of the unique requirements associated with rotational crewing, but without overarching guidance and individual ship-class concepts of operations to ensure effective management, execution, and evaluation of rotational crewing efforts, the Navy increases the risk that it will not effectively implement current and future surface-ship rotational crewing initiatives.

The Navy has completed some analyses of rotational crewing for its surface ships; however, apart from the Atlantic Fleet DDG Sea Swap initiative, the Navy has not developed a systematic method for data collection and analysis, assessment, and reporting of rotational crewing on current surface ships, including the cost-effectiveness of rotational crewing options. Additionally, the Navy has not fully analyzed or systematically assessed rotational crewing options in the analysis of alternatives for surface ships in development, including life-cycle costs. The Atlantic Fleet DDG Sea Swap initiative used a comprehensive data-collection and analysis plan for collecting, analyzing, and evaluating data and for reporting results. However, other Navy rotational crewing initiatives have not developed data-collection and analysis plans, collected and analyzed such data, or reported their findings.
According to military best practices, developing a data-collection and analysis plan is essential to any experimental initiative because it determines what needs to be measured, what data will be necessary to collect, and how the data are to be analyzed. A data-collection and analysis plan consists of all data to be collected, the content of the data (type, periodicity, and format), the collection mechanism (automated or nonautomated processes, time frame, location, and method), the data handling procedures, and relationships of the data to the initiative itself. Additionally, data-collection and analysis plans are important to transformational initiatives because they ensure that valid and reliable data are captured and understood and that the analysis undertaken addresses the key issues in the initiative. If properly prepared and implemented, the data-collection and analysis plan aids subsequent analysis efforts and helps analysts maintain the focus needed to transform data collected into information that supports future decisions. In accordance with military best practices, the Atlantic Fleet DDG Sea Swap Experiment Analysis Plan identified areas that needed to be measured (for example, morale and retention, training proficiency, operational performance, operational performance for supporting the Fleet Response Plan, long-term effect on ships’ material condition, cost of implementation, and cost-performance trade-offs), specific sources from which to collect the data (Navy reports, messages, and survey data), and how the data were to be analyzed (issues and subissues). Additionally, the Atlantic Fleet DDG Sea Swap plan identified overarching goals and key analysis issues; developed an experimental design; and defined measures and metrics. As a result, the Atlantic Fleet DDG Sea Swap final report was well organized, thoughtfully designed, and provided the reader with relevant information based on the original data-collection and analysis plan. By clearly identifying the areas needed for measurement, determining specific issues and subissues to be analyzed for each area, and systematically collecting data in accordance with the original analysis approach, the plan provided analysts and decision makers with most of the data needed to conduct comparative analyses and support future decisions.

Although the Atlantic Fleet DDG Sea Swap Experiment Analysis Plan was nearly comprehensive, it did not include a thorough cost-effectiveness analysis of the Sea Swap alternative or of any other form of rotational crewing. The plan included a marginal-cost analysis that examined shorter-term trade-offs between the Sea Swap concept and more traditional crewing concepts; however, it did not specify a comprehensive cost-effectiveness analysis that would determine the least costly crewing method to satisfy Navy requirements. According to best practices, cost-effectiveness is a method used by organizations seeking to gain the best value for their money and to achieve operational requirements while balancing costs, schedules, performance, and risks. The best value is often not readily apparent and requires an analysis to maximize value. A cost-effectiveness analysis is used where benefits cannot be expressed in monetary terms but, rather, in “units of benefit,” for example, days of forward presence. According to Office of Management and Budget guidance, a comprehensive cost-effectiveness analysis would include a comparison of alternatives, in this case, crewing options, based on a life-cycle cost analysis of each alternative.
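To illustrate the form of comparison such guidance describes, the sketch below computes a simple cost-effectiveness ratio (life-cycle cost per day of forward presence) for two notional crewing options. All cost and presence figures are hypothetical placeholders rather than data from the Sea Swap analysis; the point is only that benefits are expressed in units of benefit, here days of forward presence, and that alternatives are compared on a life-cycle cost basis.

```python
# Illustrative cost-effectiveness comparison of crewing options.
# Benefits are expressed in "units of benefit" (days of forward presence);
# all figures are hypothetical placeholders, not actual Navy cost data.

from dataclasses import dataclass

@dataclass
class CrewingOption:
    name: str
    life_cycle_cost_musd: float      # assumed total life-cycle cost, in $ millions
    presence_days_per_year: float    # assumed days of forward presence per year
    service_life_years: int = 30     # assumed ship service life

    def cost_per_presence_day(self) -> float:
        """Life-cycle cost divided by total days of forward presence delivered."""
        total_presence_days = self.presence_days_per_year * self.service_life_years
        return self.life_cycle_cost_musd * 1e6 / total_presence_days

options = [
    CrewingOption("single crew", life_cycle_cost_musd=3000, presence_days_per_year=90),
    CrewingOption("rotational crews", life_cycle_cost_musd=3400, presence_days_per_year=220),
]

for opt in sorted(options, key=CrewingOption.cost_per_presence_day):
    print(f"{opt.name}: ${opt.cost_per_presence_day():,.0f} per day of forward presence")
```

Whether a rotational option actually yields a lower cost per day of presence depends on the real cost and availability data; the sketch only shows why a life-cycle cost comparison of this kind, rather than a marginal-cost analysis alone, is needed to identify the least costly crewing method.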
The Atlantic Fleet DDG Sea Swap plan called for a cost analysis using categories based on the major issues it identified in the plan; however, the plan acknowledged that these cost categories were limited and that a more detailed cost model was needed so that costs that differ between crewing options could be identified and broken out for comparison. Additionally, the plan did not call for an analysis of full life-cycle cost data, although it stated that future rotational crewing concept analyses should consider life-cycle or total ownership costs as a part of examining future force structure options. While the Navy is collecting and compiling some data for the current surface ships involved in rotational crewing initiatives (patrol coastal ships, mine countermeasure ships, and HSV-2 Swift), there are no systematic metrics or methods for collecting and evaluating rotational crewing–specific data similar to the Atlantic Fleet DDG Sea Swap Experiment Analysis Plan. According to Navy officials, the Navy routinely collects retention, morale, material condition, training, cost, operational performance, and Fleet Response Plan–related data for all surface ships. Data collection and analysis for surface ships fall under the direction of the Surface Warfare Enterprise, an arm of the Commander, Naval Surface Forces. One of the major tenets of the Surface Warfare Enterprise and its cross-functional teams is to help recapitalize the future Navy by managing with metrics and reducing the total cost of doing business. To that end, high-ranking Navy officials led by the Commander, Naval Surface Forces, meet monthly to review and discuss the effectiveness of various manning, training, equipping, and maintaining processes. Although many of these data are similar to those collected in the Atlantic Fleet DDG Sea Swap plan, the data are not as comprehensive and are not consistent from initiative to initiative. Additionally, the Surface Warfare Enterprise data collection and analyses were not linked to the effectiveness of different crewing alternatives. Currently, there are no standard metrics or systematic methods for collecting rotational crewing–related data from surface ships because the Navy has not developed and promulgated overarching guidance that requires a systematic data-collection, analysis, and reporting methodology. Consequently, the potential value of rotational crewing is unknown, and the Navy is hindering its ability to determine optimal crewing concepts for ship classes.

For the Navy surface-ship classes currently under development (the LCS, the Joint High Speed Vessel, and the DDG-1000 Zumwalt-class guided missile destroyer), the Navy has not fully analyzed or systematically assessed rotational crewing in the analyses of alternatives. Early in the development of a new weapons system, DOD and the Navy require that an analysis of alternatives be completed that identifies the most promising alternatives. The analysis of alternatives process is intended to refine the initial weapon systems concept and requires an evaluation of the performance, operational effectiveness, operational suitability, and estimated costs, including full life-cycle costs, of alternatives that satisfy established capability needs. The analysis of alternatives assesses the advantages and disadvantages of alternatives being considered to satisfy capabilities, including the sensitivity of each alternative to possible changes in key assumptions or variables.
In at least three recent surface-ship acquisitions, the Navy has not consistently applied these principles because it did not thoroughly analyze and evaluate rotational crewing options and because the Navy’s acquisition instruction does not explicitly require evaluating rotational crewing in ship analyses of alternatives. However, according to the Navy’s acquisition instruction, all analyses of alternatives should include analysis of doctrine, organization, training, materiel, management, leadership, personnel, and facilities as well as joint implications. An evaluation of rotational crewing alternatives could affect all of these areas, including force-structure requirements. A comprehensive evaluation could also show whether rotational crewing meets forward presence requirements with fewer ships and lower life-cycle costs. Additionally, the Navy did not have specific overarching rotational crewing guidance that would require such analysis and assessments. As a result, Navy officials will not have sufficient information to make informed investment decisions affecting future obligations of billions of dollars.

The Navy identified rotational crewing as a crewing option for the LCS early in the acquisition process; however, the Navy did not complete any comprehensive analyses of rotational crewing alternatives in the ship’s analysis of alternatives. The LCS analysis of alternatives included assumptions that rotational crewing would be used on the ship; however, the analysis did not identify and assess a range of rotational crewing alternatives. Because the analysis did not identify a range of alternative crewing options, the Navy was not in a position to assess the relative operational effectiveness, suitability, and life-cycle costs of the rotational crewing alternatives. For example, the Navy did not evaluate and compare the relative forward presence and warfighting capabilities for standard and rotational crewing alternatives and the potential effects on manpower, training, and facilities. Without adequately analyzing and systematically assessing different rotational crewing alternatives in the analysis of alternatives, the Navy was not able to determine the optimal crewing alternative for fulfilling its operational needs and maximizing returns on investment. Additionally, without considering rotational crewing options as part of the analysis of alternatives, cost-effective force structure assessments are incomplete.

The analysis of alternatives for the Joint High Speed Vessel, a ship based on the operational successes of other high-speed surface ships, including the HSV-2 Swift, did not consider rotational crewing despite highly successful experiences with rotational crews on the Swift, an explicit need for forward presence, and the ship’s classification as a high-demand, low-density asset. The Swift has employed Blue-Gold rotational crewing while conducting a range of missions, including experimentation, humanitarian operations, and Global Fleet Station deployments. In focus groups, HSV-2 Swift sailors praised the predictability of the operating cycle and Blue-Gold rotational crewing. Additionally, Fleet Commanders and the commanding officers of the HSV-2 Swift Blue and Gold crews provided positive feedback on the Swift’s mission performance. High demand for the ship and its capabilities has been met because rotational crewing enabled the ship to maintain a high operational availability and a sustained forward presence.
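A force-structure comparison of the kind such an evaluation could include is sketched below: how many hulls would be needed to keep one ship continuously forward, given an assumed fraction of time each hull can spend on station under each crewing option. The presence fractions are assumptions chosen only for illustration, not figures from the Navy’s analyses of alternatives.

```python
import math

# Illustrative force-structure sketch: hulls required to keep one ship
# continuously on station, given an assumed fraction of time a hull can
# spend forward under each crewing option. All ratios are hypothetical.

presence_fraction = {
    "Single crew": 0.15,   # assumed: one ~6-month deployment per multiyear cycle
    "Sea Swap":    0.45,   # assumed: hull stays forward while crews rotate
    "Blue-Gold":   0.30,   # assumed: two crews roughly double deployed time
}

SHIPS_FORWARD = 1  # required number of ships continuously on station

for option, fraction in presence_fraction.items():
    hulls_needed = math.ceil(SHIPS_FORWARD / fraction)
    print(f"{option}: roughly {hulls_needed} hulls to keep "
          f"{SHIPS_FORWARD} ship continuously forward")
```

Under these assumed ratios, the rotational options require fewer hulls to meet the same presence requirement, which is the kind of trade the report argues the analyses of alternatives should have quantified alongside full life-cycle costs.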
The Joint High Speed Vessel analysis of alternatives considered some data and specifications from the Swift design and operational experiences. However, the Joint High Speed Vessel analysis of alternatives did not include any discussion of the Swift’s rotational crewing experiences, despite the ship’s success in maintaining a very high operational availability. In the analysis of alternatives, the Joint High Speed Vessel force structure requirements and basing options are driven by forward presence and the need for critical response time, but rotational crewing was not included as an option that could increase Joint High Speed Vessel forward presence.

During the analysis of alternatives for the DDG-1000 guided missile destroyer, rotational crewing was not thoroughly analyzed despite statements by Navy officials early in the acquisition process and in the original operational requirements document that linked rotational crewing to the ship. The analysis of alternatives for the DDG-1000 compared the effects of rotational crewing and traditional crewing on the number of ships required to generate forward presence requirements. The evaluation showed that using rotational crewing alternatives, in place of the traditional single crew approach, produces a higher forward presence with fewer ships. Although the analysis of alternatives acknowledged that rotational crewing met forward presence requirements while requiring fewer ships, it omitted further analyses of rotational crewing for the DDG-1000. Furthermore, the analysis of alternatives addressed the rotational crewing concept, but did not analyze the effect of different rotational crewing schemes on force structure, training, materiel, and other aspects that would affect overall life-cycle costs. With a total of seven planned ships, the DDG-1000 destroyer meets the high-demand, low-density benchmark for rotational crewing recommended by Naval Surface Forces in the Atlantic Fleet DDG Sea Swap report. According to Navy officials, the Navy has no plans to utilize rotational crewing on the DDG-1000, despite a lack of thorough analyses and the acknowledgement that rotational crewing meets operational requirements with the use of fewer ships. Without analyzing the costs and benefits of rotational crewing alternatives, as compared to the traditional single crewing approach, the Navy will not be able to make informed decisions about DDG-1000 procurements and future force structure.

Lastly, the analysis of alternatives for the next-generation guided missile cruiser, CG(X), was in the review process and had not been released as of April 2008. Navy officials have identified the CG(X) ship as a good candidate to be rotationally crewed. According to DOD documentation, the analysis of alternatives for the CG(X) ship will analyze and document major sustainment alternatives, including variations in service life, reliability, operating profiles, maintenance concepts, manpower and crewing concepts (including crew rotation and Sea Swap), and other relevant sustainment factors to fully characterize the range of sustainment options. Although it is planned that the analysis of alternatives for CG(X) will analyze different crewing options, a Naval Sea Systems Command official could not provide us with any information about the content of the study until it is completed.

The Navy has taken some actions to collect and use lessons learned from rotational crewing experiences.
For example, the Atlantic Fleet DDG Sea Swap initiative developed and implemented a robust lessons-learned plan. Despite some progress in collecting and sharing lessons learned within individual ship communities, the Navy’s efforts in many cases were not systematic and did not use the Navy Lessons Learned System. Additionally, the Navy has not developed overarching processes for the systematic collection and dissemination of lessons learned pertaining specifically to rotational crewing.

The Navy has taken actions to collect, disseminate, and capitalize on lessons learned pertaining to rotational crewing within individual commands, using both formal and informal methods. For example, as part of the Atlantic Fleet DDG Sea Swap initiative, the Navy implemented a robust lessons-learned plan to actively collect feedback from destroyer crews. The plan outlined a formal lessons-learned process and established a team to collect, review, and analyze lessons learned and ensure that they were incorporated into policies and procedures. The team systematically collected lessons learned from destroyer rotational crews by, among other things, conducting interviews with crew members, reviewing ship message traffic, and examining turnover observation reports. According to the Atlantic Fleet DDG Sea Swap initiative report, draft lessons-learned submissions underwent a well-defined review process to ensure quality, completeness, and consistency. Lessons learned that were of immediate utility were disseminated to Sea Swap initiative crews. Those relating to management and oversight were vetted with the goal of supporting future rotational crewing decision making and policy development. In addition, the Atlantic Fleet DDG Sea Swap initiative leveraged lessons learned from the 2002–2004 Pacific Fleet Destroyer Sea Swap effort, incorporating them into the development of operational plans.

Other ship communities, using less systematic processes, have also captured and shared lessons learned within their communities. For example, the mine warfare community compiled lessons learned following a crew turnover in February 2007, when this community began using a “Blue-Gold” rotational crewing alternative. The guided missile submarine community, in planning for its implementation of rotational crewing, developed lessons learned from a crew rotation exercise in Hawaii. These lessons learned were disseminated to command officials and other ships within this community and also can be accessed from an internal submarine forces Web site’s lessons-learned page. In addition, LCS officials stated that the LCS community shares lessons learned within the command through direct feedback from crew members and in class squadron, cross-functional team, and Oversight Board meetings. These meetings provide a forum to identify potential barriers and propose actions to resolve them, resulting in the development of lessons learned. The LCS community also has conducted a series of crew swap exercises to collect lessons learned regarding logistical support requirements in forward-deployed locations. Officials stated that the lessons learned would be incorporated into LCS standard operating procedures. Lessons learned were shared between individual ship communities through direct interaction and, on a more limited basis, through the Navy Lessons Learned System. Individual ship communities collected and shared lessons learned primarily through direct interaction, such as meetings and site visits.
Table 2 highlights examples of direct actions taken to collect and leverage lessons learned from rotational crewing experiences between ship communities. In addition, lessons learned were collected and disseminated through the Navy Lessons Learned System, which is a central repository for the collection and dissemination of lessons learned and a means to correct problems identified from fleet operations. The Atlantic Fleet DDG Sea Swap initiative lessons-learned plan explicitly incorporated into its goals the submission of lessons learned into this system. Twenty-six lessons learned were recorded in the system, which can be accessed by Navy personnel ashore and at sea through a classified Internet site.

Despite the Navy’s progress in collecting and sharing lessons learned within ship communities, its efforts in many cases were not systematic and did not use the Navy Lessons Learned System. Instead, the development and sharing of lessons learned relied on informal processes left to individual ship commands and thus were not done consistently across all ship communities that use rotational crewing. For example, the mine warfare and patrol coastal communities lack formal written processes to collect lessons learned related specifically to rotational crewing, according to command officials. Focus group responses from both these communities indicate that efforts to gather lessons learned from crewmembers and communicate them up the chain of command have been inconsistent. A mine warfare community official stated that the collection of lessons learned depends largely on the commanding officer and that lessons are typically shared by word of mouth or e-mail. Furthermore, while the LCS and guided missile submarine communities have taken steps to collect and capitalize upon lessons learned before they operationally deploy, officials stated that these communities have yet to develop formal processes—such as written procedures or data-collection plans—to gather and share lessons learned specifically related to rotational crewing within their ship communities. LCS officials stated that their community is small at present, allowing lessons learned to be effectively shared informally, but acknowledged the need for formal processes in the future. Without formal processes, the LCS and guided missile submarine communities may be less likely to systematically collect lessons learned—similar to the mine warfare and patrol coastal communities—and therefore may miss opportunities to improve rotational crewing implementation. While ship communities have collected lessons learned among individual commands through direct interaction, such as meetings and site visits, they have not fully used the Navy Lessons Learned System to enhance knowledge sharing. As of October 30, 2007, lessons learned directly related to rotational crewing had yet to be recorded in the Navy Lessons Learned System by the mine warfare, patrol coastal, HSV-2 Swift, guided missile submarine, and LCS communities. In addition, ship command officials from the mine warfare, patrol coastal, and LCS commands have indicated that they have not used the Navy Lessons Learned System to access lessons learned pertaining to rotational crewing.
The following are examples in which difficulties experienced by current rotational crewing efforts had already been addressed in previously recorded lessons learned:

Issues such as personnel gaps and training deficiencies, lack of accountable inventory control measures during the crew turnovers, mitigating ship configuration differences, and the effect of limited port visits on crew morale were identified as problem areas in focus group discussions with mine warfare, patrol coastal, and guided missile submarine rotational crews. However, lessons learned recorded by the Atlantic Fleet DDG Sea Swap initiative in the Navy Lessons Learned System had already addressed these issues.

As previously mentioned in this report, rotational crewing has been implemented in separate, disjointed efforts across ship communities without top-down leadership because the Navy has not established a management team to oversee and integrate these efforts. However, lessons learned from the Atlantic Fleet DDG Sea Swap initiative recommended the creation of a management team to, among other things, help define performance measures for rotational crewing efforts and ensure that lessons learned are documented and incorporated into existing policies and procedures.

The LCS community is trying to resolve transportation logistics barriers that are already addressed by lessons learned from the guided missile submarine community’s exercise on transportation logistics for forward-deployed crew turnovers. However, guided missile submarine community officials stated that they have not entered lessons learned from their rotational crewing experiences into the Navy Lessons Learned System. Consequently, the LCS community has not been able to capitalize on these lessons learned in its efforts to address transportation logistics issues. Officials from both the guided missile submarine and LCS communities stated that their experiences are likely to be pertinent to current and future ship classes and recognized the importance of recording lessons learned in the system to benefit the rest of the Navy.

As the above examples demonstrate, by not fully utilizing the Navy Lessons Learned System, the Navy may continue to experience difficulties similar to those that previously recorded lessons learned sought to correct. Until the system is used to leverage past lessons learned, ship communities may miss opportunities to more effectively plan and conduct crew rotations, and may be unable to prevent problems that were addressed in past rotational crewing experiences.

Lessons learned are not developed and shared consistently across all ship communities that use rotational crewing because the Navy has not developed overarching processes to help ensure that ship commands systematically collect and disseminate lessons learned from their rotational crewing experiences. While the Chief of Naval Operations instruction for the Navy Lessons Learned System establishes a process for the collection, validation, and distribution of unit feedback, Navy Lessons Learned Program officials stated that the collection and sharing of lessons learned is not required and, instead, is left to the discretion of individual ship commands. Nonetheless, the Navy Warfare Development Command, which is responsible for administering the Navy’s system, has launched an initiative to actively collect lessons learned for major exercises and events, using, for example, a lessons-learned team and a data-collection plan to collect information.
Navy Warfare Development Command officials stated that, with the proper resources, it would be possible to employ similar active collection methods specifically for rotational crewing efforts. However, aside from the Atlantic Fleet DDG Sea Swap initiative, the Navy has not developed processes to guide the active and systematic collection of lessons learned pertaining specifically to rotational crewing. The initiative’s concept of operations stressed the importance of high-quality lessons learned in implementing new crewing concepts. It also expressly incorporated the Navy Lessons Learned System into its lessons-learned processes. However, these processes applied only to the Atlantic Fleet DDG Sea Swap initiative and were not used in other ship communities. According to the concept of operations, the risks of not taking a proactive approach to lessons learned include failing to document policy changes and preserve process improvements, which is important given the high turnover of personnel during the time frame of the initiative. Similar turnover issues may apply to other ship communities that employ rotational crewing. Without overarching guidance to promote the systematic collection and dissemination of lessons learned across all ship communities, knowledge about rotational crewing may be lost, and crews will be unable to benefit from the Navy’s collective experiences.

Given the fiscal environment facing the Navy and the rest of the federal government, decision makers must make investment decisions that maximize return on investment and provide the best value for the taxpayer. Rotational crewing can be a viable alternative to mitigate affordability challenges in the Navy while supporting a high pace of operations and an array of mission requirements. As a result, the Navy needs to be in a better position to make informed decisions about the potential for applying rotational crewing to current and future ships. As new ships become increasingly expensive, it is imperative that rotational crewing alternatives be fully considered early in the acquisition process, when the department conducts analyses of alternatives. Without comprehensive analyses of alternatives, cost-effective force structure assessments are incomplete, and the Navy does not have a complete picture of the number of ships it needs to acquire. While the Navy has made progress in refining rotational crewing concepts, the Navy has not taken all of the steps that would be helpful to effectively manage rotational crewing efforts and assess crewing options for current and future ships. The Navy has made significant progress since our November 2004 report on rotational crewing. For example, the Atlantic Fleet DDG Sea Swap initiative benefited from an implementation team that developed and implemented a nearly comprehensive experiment analysis plan, promulgated a detailed concept of operations, and recorded and disseminated lessons learned. Further, several ship commands have promulgated their own crew-exchange instructions and concepts of operations. Progress has been limited, however, to specific rotational crewing efforts and has not been systematically integrated across the Navy. Without a comprehensive management approach that includes top-level leadership and an implementation team to guide and assess rotational crewing, the Navy cannot be assured that rotational crewing efforts are coordinated and integrated as it attempts to lead a successful transformation of its ship-crewing culture.
Further, without an overarching instruction to guide rotational crewing initiatives, the Navy may limit the potential for successfully managing, implementing, and evaluating rotational crewing as a transformational means of increasing capabilities in a cost-effective manner. The Navy has also not developed a systematic approach to analyzing rotational crewing alternatives or collecting and sharing related lessons learned. Without a systematic approach to analyzing rotational crewing alternatives on current and future ships, the Navy may not be able to determine if particular alternatives are successful in, or have the potential for, fulfilling operational needs and maximizing return on investment. As a result, the Navy may not develop and procure the most cost-effective mix of ships to meet operational needs. Additionally, by not systematically collecting and using lessons learned from rotational crewing experiences, the Navy risks repeating mistakes and could miss opportunities to more effectively plan and conduct crew rotations.

To facilitate the successful transformation of the Navy’s ship-crewing culture, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following three actions: assign clear leadership and accountability for managing rotational crewing efforts; establish an overarching implementation team to provide day-to-day management oversight of rotational crewing efforts, coordinate and integrate efforts, and apply their results to the fleet; and develop and promulgate overarching guidance to provide the high-level vision and guidance needed to consistently and effectively manage, implement, and evaluate all rotational crewing efforts.

To ensure effective management, implementation, and evaluation of rotational crewing efforts, we recommend that the Commander, U.S. Fleet Forces, direct the development and promulgation of concepts of operations by all ship communities using or planning to use rotational crewing; these concepts of operations should include a description of how rotational crewing may be employed and the details of by whom, where, and how it is to be accomplished, employed, and executed.

To ensure that the Navy assesses the potential of different rotational crewing alternatives for improving performance and reducing costs for ship classes, we recommend that the Secretary of Defense direct the Secretary of the Navy, under the purview of the implementation team, to take the following two actions: develop a standardized, systematic method for data collection and analysis, assessment, and reporting on the results of rotational crewing efforts, including a comprehensive cost-effectiveness analysis that includes life-cycle costs, for all rotational crewing efforts; and require, as part of the mandatory analysis of alternatives in the concept refinement phase of the defense acquisition process, assessments of potential rotational crewing options for each class of surface ship in development, including full life-cycle costs of each crewing option.
To ensure that the Navy effectively leverages lessons learned, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following two actions: develop overarching guidance to ensure the systematic collection and dissemination of lessons learned pertaining specifically to rotational crewing; and incorporate components of the lessons-learned approach outlined in the Atlantic Fleet DDG Sea Swap Concept of Operations, including, among other things, establishing a lessons-learned team, developing a data-collection plan, and increasing use of the Navy Lessons Learned System.

Because DOD disagreed with our recommendations dealing with assigning clear leadership, establishing an implementation team, developing and promulgating overarching guidance, and improving the use of lessons learned, we are suggesting that Congress consider requiring the Secretary of Defense to direct the Secretary of the Navy to assign clear leadership and accountability for managing rotational crewing efforts; establish an overarching implementation team to provide day-to-day management oversight of rotational crewing efforts, coordinate and integrate efforts, and apply their results to the fleet; develop and promulgate overarching guidance to provide the high-level vision and guidance needed to consistently and effectively manage, implement, and evaluate all rotational crewing efforts; develop overarching guidance to ensure the systematic collection and dissemination of lessons learned pertaining specifically to rotational crewing; and incorporate components of the lessons-learned approach outlined in the Atlantic Fleet DDG Sea Swap Concept of Operations, including, among other things, establishing a lessons-learned team, developing a data-collection plan, and increasing use of the Navy Lessons Learned System. Congress should also consider requiring the Secretary of Defense to direct the Secretary of the Navy to report on its progress when the President’s budget for fiscal year 2010 is submitted to Congress.

DOD, in its comments on a draft of this report, partially agreed with our three recommendations regarding concepts of operations, data collection and analysis, and rotational crewing assessments during surface-ship analyses of alternatives. DOD disagreed with our five other recommendations that would assign clear leadership and accountability for managing rotational crewing efforts; establish an overarching implementation team; develop and promulgate overarching guidance to provide the high-level vision and guidance needed to consistently and effectively manage, implement, and evaluate all rotational crewing efforts; ensure the systematic collection and dissemination of lessons learned pertaining specifically to rotational crewing; and incorporate components of the lessons-learned approach outlined in the Atlantic Fleet DDG Sea Swap Concept of Operations. DOD stated that measures are already in place to manage ship and submarine manning, training, and equipping. However, as discussed below, we do not believe that the Navy’s actions go far enough in providing leadership, management, and guidance in transforming the Navy’s surface-ship-crewing culture; in collecting, analyzing, reporting, and integrating the results of different rotational crewing efforts; and in documenting and acting on lessons it has learned during implementation of different rotational crewing alternatives.
As such, the Navy may be missing opportunities to improve its transformational capabilities and cost-effectively increase surface-ship operational availability. Therefore, we are suggesting that Congress consider requiring the Secretary of Defense to direct the Secretary of the Navy to implement our recommendations and report to Congress on its progress when the President's budget for fiscal year 2010 is submitted to Congress. The department also provided technical comments, which were incorporated as appropriate. DOD's comments are reprinted in their entirety in appendix III. Our specific comments follow. DOD disagreed with our recommendation that the Navy facilitate the successful transformation of its ship-crewing culture by assigning clear leadership and accountability for managing rotational crewing efforts. DOD stated that the Department of the Navy already has clear leadership and accountability for the manning of ships and submarines and that this management structure includes oversight and leadership within both operational and administrative chains of command. It further noted that additional organizational structure dedicated to rotational crewing is unnecessary and potentially counterproductive. We have identified several key management practices at the center of implementing transformational programs, which include ensuring that top leadership drives the transformation. While the Navy has administrative and operational management structures, there is not a designated leader to manage all rotational crewing efforts in the Department of the Navy. As a result, numerous separate rotational crewing efforts continue with little, if any, top-down leadership and coordination, and no team or steering group exists within the Navy to manage the transformation of the Navy's ship-crewing culture. We continue to believe that our recommendation merits further action and have included this issue in a matter for congressional consideration. DOD disagreed with our recommendation that the Navy should establish an overarching implementation team to provide day-to-day management oversight of rotational crewing efforts, coordinate and integrate efforts, and apply their results to the Fleet. DOD stated that the Navy already exercises day-to-day management to support ship and submarine manning and training and that an implementation team dedicated to rotational crewing is unnecessary and potentially counterproductive. We reported in 2003 that key practices for successful transformations include dedicating an implementation team responsible for the day-to-day management of the transformation to ensure that the various initiatives are integrated. Although the Navy has established implementation teams for selected rotational crewing initiatives and has other existing management structures, it has not established an implementation team for managing all rotational crewing programs to ensure successful transformation of the Navy's ship-crewing culture. As a result, the Navy does not have a dedicated team or steering group that can devote focused attention, provide a communication structure, apply lessons learned, and execute other key practices that would build on its successful efforts and ensure consistent management of rotational crewing across the fleet. We continue to believe that our recommendation merits further action and have included this issue in a matter for congressional consideration.
DOD disagreed with our recommendation that the Navy should develop and promulgate overarching guidance to provide the high-level vision and guidance needed to consistently and effectively manage, implement, and evaluate all rotational crewing efforts. DOD stated that the Navy has sufficient guidance in place to provide the high-level vision necessary to manage ship and submarine manning. As discussed in the report, the Navy has developed guidance for some rotational crewing efforts. However, the development, dissemination, and implementation of rotational crewing guidance have been inconsistent and fragmented. As noted in this report, an overarching directive for rotational crewing would provide essential and consistent Navy-wide policy and guidance on rotational crewing efforts; establish leadership, delegate authority, and assign responsibilities; assign missions, functions, or tasks; and establish a reporting requirement. DOD also stated that, although rotational crewing includes some unique crew considerations and support requirements, the training and support of sailors involved in rotational crewing are little different than those for sailors in the standard crewing process. We agree that the goals and objectives of ship and crew training and support are little different between rotational and standard crews. However, as shown in some of the concepts of operations and in the Navy Lessons Learned System, crew exchange guidance for rotational crewing and the execution of training and support for rotational crewing efforts can present many unique challenges for sailors, in addition to the challenge of adapting sailors to a change in ship-crewing culture. We continue to believe that our recommendation merits further action and have included this issue in a matter for congressional consideration. DOD partially agreed with our recommendation that the Commander, U.S. Fleet Forces, direct the development and promulgation of concepts of operations by all ship communities using or planning to use rotational crewing. DOD stated that the Navy already uses appropriate concepts for fleet operations and, when or if additional rotational crewing is warranted, the Navy will issue specific guidance, instructions, and concepts of operations. While we strongly support the Navy's efforts to develop concepts of operations that guide fleet rotational crewing efforts, its efforts have been inconsistent. For example, ship communities, such as patrol coastal and mine warfare, have experienced implementation challenges because they lacked key information such as the roles and responsibilities of individual decision makers, managers, and leaders involved in rotational crewing execution. For these reasons, we continue to believe that our recommendation merits further action and that the Commander, U.S. Fleet Forces, should direct the development and promulgation of concepts of operations by all ship communities using or planning to use rotational crewing, using the Atlantic Fleet DDG Sea Swap Concept of Operations as a model for other rotational crewing initiatives. DOD partially agreed with our recommendation that the Navy develop a standardized, systematic method for data collection and analysis, assessment, and reporting on the results of rotational crewing efforts, including a comprehensive cost-effectiveness analysis that includes life-cycle costs, for all rotational crewing efforts.
DOD stated that the Navy has no plans for broad general application of rotational crewing to all ship classes, and a standing implementation team and data collection is unnecessary. DOD also stated that the Navy will conduct appropriate studies to determine if and when additional rotational crewing is appropriate based on cost effectiveness. While we support DOD's efforts to proactively conduct studies, based on cost effectiveness, to determine if and when rotational crewing is appropriate to use on surface ships, we urge the Navy to take steps to develop a standardized, systematic method for collecting data and analyzing, assessing, and reporting results, including cost-effectiveness analysis, on all rotational crewing efforts, including those currently underway. As discussed in the report, the Surface Warfare Enterprise is collecting data from surface ships, including those participating in rotational crewing initiatives; however, the data it collects are not consistent from initiative to initiative, and none of the data are tied to the effectiveness of different crewing schemes or rotational versus traditional crewing schemes. DOD also stated that the LCS is the only new ship class that currently plans on implementing rotational crewing. While we agree that the LCS is the only new ship class with definitive plans to rotationally crew its ships, several other future ship classes, including the Joint High Speed Vessel, DDG-1000, and CG(X), still fit the requirements of potential rotationally crewed ships, as described by Fleet Forces Command. Therefore, we continue to believe, as we have recommended, that DOD should direct the Navy to develop a standardized, systematic method for data collection and analysis, assessment, and reporting on the results of rotational crewing efforts, including a comprehensive cost-effectiveness analysis that includes life-cycle costs, so that the potential value of rotational crewing will be known and the Navy will be able to determine optimal crewing concepts for current and future ship classes. DOD partially agreed with our recommendation that the Navy require, as part of the mandatory analysis of alternatives in the concept refinement phase of the defense acquisition process, assessments of potential rotational crewing options for each class of surface ship in development, including full life-cycle costs of each crewing option. DOD agreed that all feasible crewing options should be considered during the concept refinement phase of the defense acquisition process. DOD further stated that ships determined to have a potentially advantageous rotational crewing application will assess and include this option among the various crewing alternatives reported by the analysis of alternatives. We support DOD's assessment that all feasible rotational crewing options should be considered during the concept refinement phase in the analysis of alternatives. DOD disagreed with our recommendation that the Navy develop overarching guidance to ensure the systematic collection and dissemination of lessons learned pertaining specifically to rotational crewing. DOD stated that the Navy already uses "lessons learned" tools as part of rotational crewing and that further guidance to use these tools is not needed. We support the progress the Navy has made in collecting lessons learned and documenting these lessons in the Navy Lessons Learned System.
However, as discussed in the report, most ship communities did not submit or draw on lessons in the Navy Lessons Learned System to enhance knowledge sharing or learn from others' experiences. For example, the mine warfare, patrol coastal, LCS, and guided missile submarine communities lack formal written processes to collect lessons learned related specifically to rotational crewing. Without guidance to ensure collection and dissemination of lessons learned, the Navy unnecessarily risks repeating past mistakes and could miss opportunities to more effectively plan and conduct crew rotations. Therefore, we continue to believe that our recommendation merits further action and have included this issue in a matter for congressional consideration. DOD disagreed with our recommendation that the Navy incorporate components of the lessons-learned approach outlined in the Atlantic Fleet DDG Sea Swap Concept of Operations, including, among other things, establishing a lessons-learned team, developing a data-collection plan, and increasing use of the Navy Lessons Learned System. DOD stated that the Navy already relies on data collection and analysis from ships and that requiring already implemented rotational crewing efforts to adopt experimental data collection procedures is unnecessary. DOD further stated that procedures are already in place for crews, rotational or standard, to provide data to the chain of command to identify improvements. As discussed in the report, the Navy has taken some actions to collect, disseminate, and capitalize on lessons learned from its crew rotation experiences. However, despite some progress in collecting and sharing lessons learned within individual ship communities, the Navy's efforts in many cases were not systematic and did not use the Navy Lessons Learned System. Instead, the development and sharing of lessons learned relied on informal processes that were left to individual ship commands and thus were not applied consistently across all ship communities that use rotational crewing. In contrast, the Sea Swap initiative ensured documentation of lessons learned by outlining a requirement and a process in the Atlantic Fleet DDG Sea Swap Concept of Operations. The concept of operations also noted that the risks of not taking a proactive approach to lessons learned include failing to document policy changes and preserve process improvements, which is important given the high turnover of personnel during the time frame of the initiative. We believe that our recommendation merits further action and have included this issue in a matter for congressional consideration. We are sending copies of this report to the Secretary of Defense; the Secretary of the Navy; the Chairman, Joint Chiefs of Staff; and the Director, Office of Management and Budget. We will also make copies available to other congressional committees and interested parties on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4402 or stlaurentj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Nuclear-powered Ohio-class ballistic missile submarines, also known as Trident submarines, provide the sea-based leg of the triad of U.S. strategic deterrent forces and the most survivable nuclear strike capability.
The ballistic missile submarine force consists of 14 submarines—6 homeported in Kings Bay, Georgia, and 8 in Bangor, Washington. Each submarine has about 15 officers and 140 enlisted personnel. To maintain a constant at-sea presence, a Blue-Gold rotational crewing concept is employed on these submarines. Each ship has a "Blue" Crew and a "Gold" Crew, each with its own respective ship command. The ship deploys with one of these crews for 77 days, followed by a 2- to 3-day crew turnover and a 35-day maintenance period. For example, after a Blue Crew deployment, the Gold Crew takes command of the boat following a 3-day turnover process. The Blue Crew assists the Gold Crew in conducting maintenance repairs. During the Gold Crew's patrol, the Blue Crew stands down and enters a training cycle in its homeport. The first four of the Ohio-class Trident fleet ballistic missile submarines are being converted to nuclear-powered guided missile and special-operations submarines. Two submarines will be homeported in Kings Bay, Georgia, and two will be homeported in Bangor, Washington. Each submarine has about 15 officers and 144 enlisted personnel and can carry up to 66 Special Operations Forces personnel. According to Navy officials, in order to provide greater operational availability, Blue-Gold rotational crewing is employed on these submarines. Each submarine has a "Blue" crew and a "Gold" crew and each crew has its own respective command. The operating cycle consists of four alternating Blue and Gold crew deployments averaging about 73 days followed by a homeport maintenance period of 100 days. Two- to 3-day crew turnovers will take place overseas at sites such as Guam and Diego Garcia and coincide with a 23-day voyage-repair period. The Arleigh Burke–class guided missile destroyers provide multimission offensive and defensive capabilities, operating independently or as part of other naval formations. The guided missile destroyer force consists of 52 ships—with primary homeports in San Diego, California, and Norfolk, Virginia. Each destroyer has about 24 officers and 250 enlisted personnel. The Commander, Naval Surface Force, U.S. Atlantic Fleet, conducted a Sea Swap initiative during 2005–2007, as a follow-on to the 2002–2004 proof-of-concept demonstration conducted by the Commander, Naval Surface Force, U.S. Pacific Fleet. Both Sea Swap experiments involved three guided missile destroyers and three crews, with crews rotating every 6 months to the forward-deployed ship. The Cyclone-class patrol coastal ships are small Navy vessels used to conduct surveillance and shallow-water interdiction operations in support of maritime homeland security operations and coastal patrol of foreign shores. The patrol coastal force consists of eight ships—five homeported in Bahrain and three in Little Creek, Virginia. Five additional ships, currently on loan to the U.S. Coast Guard, will be returned to the Navy over the next 3 years. Each patrol coastal has about 4 officers and 26 enlisted personnel. According to Navy officials, the Navy is using a Horizon rotational crewing model on patrol coastal ships in which 13 crews rotate among the eight ships in order to increase operation days in the Arabian Gulf. Each crew spends 6 months deployed to Bahrain and then 10 months training in homeport in Virginia. The Avenger-class mine countermeasure ships are mine hunter-killers capable of finding, classifying, and destroying moored and bottom mines.
The mine countermeasure ship force consists of 14 ships—8 homeported in Ingleside, Texas, 4 homeported in Bahrain, and 2 homeported in Sasebo, Japan. Each mine countermeasure ship has about 8 officers and 76 enlisted personnel. According to Navy officials, in order to increase operation days in the Arabian Gulf, the Navy utilizes a Blue-Gold-Silver rotational crewing model on mine countermeasure ships. A "Blue" crew and a "Gold" crew are assigned to each of the four ships in Bahrain and four of the eight ships in Texas. The "Blue" and "Gold" crews rotate by spending 4 months deployed in Bahrain and then 4 months back in Texas. The four remaining crews in Texas make up the "Silver" crews assigned to the other four ships in Texas. The HSV-2 Swift is a high-speed wave-piercing aluminum-hulled catamaran that was acquired as an interim mine warfare command and support ship and a platform for conducting joint experimentation, including Littoral Combat Ship program development. The Swift has about 45 crew members (officers and enlisted). The Navy leased and accepted delivery of the Swift from the builder, Bollinger/Incat, in August 2003. The Swift utilizes Blue-Gold crewing to maximize operational availability. The "Blue" crew is based in Ingleside, Texas, and the "Gold" crew in Little Creek, Virginia. Each crew operates the ship for about 117 days, with 3- to 4-day crew exchanges occurring wherever the ship happens to be at the end of that period, whether in homeport or at an overseas location. The Littoral Combat Ship is a new class of Navy surface combatants that is intended to be fast, agile, and tailorable to the specific missions of antisurface warfare, antisubmarine warfare, and mine warfare in heavily contested littoral and near-shore waters. Interchangeable mission packages will be used to assure access to the littorals for Navy forces in the face of threats from surface craft, submarines, and mines. The Navy plans to build 55 of these ships over the life of the program, as well as 24 mine-warfare mission packages, 24 surface-warfare mission packages, and 16 anti-submarine-warfare mission packages. The Littoral Combat Ship core crew, which will man the seaframe, will have 40 crew members, while each mission package will have a maximum of 15 personnel onboard and the aviation detachment will have 23. In order to increase operational availability, the Navy is exploring various rotational crewing options. The first two ships now under construction will utilize the Blue-Gold rotational crewing model. As more ships are commissioned, the Navy plans to use a rotational crewing concept similar to the one employed on mine warfare ships. Specifically, the Navy envisions using four crews to operate three ships based in the continental United States, of which one ship would be forward-deployed at any given time. Developed under the DD(X) destroyer program, the DDG-1000 Zumwalt is the lead ship of a class of next-generation multimission destroyers tailored for land attack and littoral dominance. The Zumwalt class will provide forward presence and deterrence and operate as an integral part of joint and combined expeditionary forces. The ship has not been built, but the first ship is planned for delivery to the Navy in 2013. The planned procurement of the DDG-1000 will be completed by fiscal year 2013 with a total of seven ships. Current DDG-1000 plans anticipate a crew size of 148 people, including a 28-person aviation detachment. The Navy currently plans to utilize the standard one-ship, one-crew model on the DDG-1000.
However, in the Atlantic Fleet DDG Sea Swap report, Fleet Forces Command notes that rotational crewing models are being considered for the DDG-1000, likely due to its role as a high-demand, low-density asset. The Joint High Speed Vessel will provide combatant commanders high-speed intratheater sealift mobility with inherent cargo handling and the capability of transporting personnel, equipment, and supplies over operational distances in support of maneuver and sustainment operations. The ship has not been built, but the first ship is planned for delivery to the Navy in 2011. According to Navy officials, there are eight ships in the current program of record—3 Navy and 5 Army. Current Navy plans anticipate a crew size of about 40 persons. Naval Sea Systems Command officials explained that crewing alternatives for the Joint High Speed Vessel are still under development. Officials also explained that the Navy has not selected a material solution for the Joint High Speed Vessel and is in source selection for multiple concept designs. The Navy is currently developing technologies and studying design options for a planned new air- and missile-defense surface combatant, the CG(X) cruiser. The Navy is currently reviewing an analysis of alternatives to determine what capabilities and design the CG(X) will have, including nuclear power options. The Navy intends to begin buying the CG(X) cruiser in 2011 and acquire a total force of 19 ships. Crew size has not been determined. Naval Sea Systems Command officials explained that crewing alternatives for the CG(X) are still under development. Officials also explained that the Navy has not selected a material solution for CG(X), as it is pre-Milestone A and the Analysis of Alternatives is in review within the Navy. To assess the extent to which the Navy employed a comprehensive management approach to coordinate and integrate rotational crewing efforts and transform its ship-crewing culture, we interviewed officials from the Department of the Navy, Fleet headquarters, and the private sector; reviewed relevant Navy practices and speeches by Navy leadership; received briefings from relevant officials; and compared the Navy's approach with our prior work on best practices for managing and implementing organizational transformations. To identify these best practices, we reviewed our prior work including GAO, Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. We reviewed key documents including the Littoral Combat Ship Platform Wholeness Concept of Operations and the U.S. Fleet Forces DDG Sea Swap Initiative Final Report. We also conducted focus groups with crews participating in rotational crewing initiatives to obtain views, insights, and feelings of Navy submarine and ship officers and enlisted personnel, as well as to determine the extent to which the Navy had transformed its ship-crewing culture. In addition, we examined key documents from the Navy's Fleet Training area to demonstrate the architecture of an overarching implementation team. To assess the extent to which the Navy has developed, disseminated, and implemented guidance for rotational crewing on surface ships, we interviewed officials from the U.S. Fleet Forces Command; Commander, Naval Surface Forces; and Commander, Naval Submarine Forces.
We also interviewed officials from the Patrol Coastal Class Squadron; Mine Countermeasures Squadrons One, Two, and Three; Submarine Group Trident; HSV-2 Swift; and the Littoral Combat Ship Class Squadron. In addition, we obtained and reviewed exchange of command guidance issued by Commander, Naval Surface Forces, and its subordinate commands, including the Commander, Mine Warfare Command; Commander, Mine Countermeasures Squadron Two; the Patrol Coastal Class Squadron; and the Regional Support Organization Norfolk, which provided oversight of the Atlantic Fleet DDG Sea Swap ships and crews. We also obtained and reviewed the concepts of operations for the Atlantic Fleet DDG Sea Swap, the Littoral Combat Ship, and the guided missile submarine program. To assess the potential usefulness and application of concepts of operations, we reviewed best practices guidance from the Navy, the Department of Defense, and the Department of Transportation. To assess the extent to which the Navy has analyzed, evaluated, and assessed potential rotational crewing efforts for current and future ships, we interviewed officials from the Department of the Navy, Fleet headquarters, and the private sector, and received briefings from relevant officials. We reviewed and analyzed the Atlantic Fleet DDG Sea Swap Experiment Analysis Plan and the U.S. Fleet Forces DDG Sea Swap Initiative Final Report. We also reviewed the analysis of alternatives guidance contained in DOD and Navy acquisition instructions and the Defense Acquisition Guidebook. We also obtained and analyzed the analysis of alternatives for several ships in development, including the DDG-1000, Littoral Combat Ship, and Joint High Speed Vessel. To determine military best practices for data collection and evaluation, we reviewed several key documents, including the Guide for Understanding and Implementing Defense Experimentation, the Navy Warfare Development Command's Analysis in Sea Trial Experimentation, and prior GAO reports. In addition, we conducted focus groups with crews participating in rotational crewing initiatives to obtain views, insights, and feelings of Navy submarine and surface-ship officers and enlisted personnel, as well as to determine the extent to which the Navy collects, analyzes, and evaluates rotational crewing data. To assess the extent to which the Navy has systematically collected, disseminated, and capitalized on lessons learned from past and current rotational crewing experiences, we interviewed officials from the following Navy commands: the Navy Warfare Development Command, Naval Surface Forces Command, Mine Countermeasure Class Squadron, and Patrol Coastal Class Squadron; we interviewed officials from the guided missile submarine, HSV-2 Swift, and LCS communities; and we conducted 19 focus group meetings with rotational crews. We also obtained and reviewed the Atlantic Fleet DDG Sea Swap Experiment Analysis Plan, the Atlantic Fleet DDG Sea Swap Concept of Operations, the U.S. Fleet Forces DDG Sea Swap Initiative Final Report, the Littoral Combat Ship Platform Wholeness Concept of Operations, and documentation of lessons learned from the guided missile destroyer (DDG), mine warfare, and guided missile submarine communities. In addition, we queried the Navy Lessons Learned System for lessons learned pertaining directly to rotational crewing and reviewed Navy Lessons Learned System guidance.
We assessed the Navy Lessons Learned System by interviewing program officials, requesting data queries by these officials, and comparing the results of these queries with our own data queries; we determined that the data were sufficiently reliable for our analysis. Organizations and vessels contacted during our review included the Commander, Mine Countermeasure Class Squadron (Squadrons One, Two, and Three); U.S.S. Chief (MCM-14); Commander, Submarine Group Trident; the Naval Intermediate Maintenance Facility, Pacific Northwest (formerly the Trident Refit Facility); the Trident Training Facility; and U.S.S. Ohio (SSGN-726). We conducted focus group meetings with Navy submarine and ship officers and enlisted personnel who were involved in crew rotations. Focus groups involve structured small group discussions designed to gain more in-depth information about specific issues that cannot easily be obtained from single or serial interviews. As with typical focus group methodologies, our design included multiple groups with varying group characteristics but some homogeneity—such as rank and responsibility—within groups. Most groups involved 7 to 10 participants. Discussions were held in a structured manner, guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. Our overall objective in using a focus group approach was to obtain views, insights, and feelings of Navy submarine and ship officers and enlisted personnel involved in crew rotations. To gain broad perspectives, we conducted 19 separate focus group sessions with multiple groups of Navy ship officers and enlisted personnel involved in crew rotations on a broad range of ship types, from small focused-mission ships such as patrol coastals to larger, more complex ships such as nuclear-powered and nuclear-armed strategic missile submarines. Table 3 identifies the composition of the focus groups on each of the vessels. Across focus groups, participants were selected to ensure a wide distribution of officers, enlisted personnel, seniority, and ship departments. GAO analysts traveled to three naval stations to conduct the focus groups. We conducted focus groups with all ship communities currently participating in rotational crewing. The number of focus groups we conducted varied by ship community depending upon ship crew sizes, the types of crew member responsibilities (e.g., command, engineering, and maintenance), and the experience level of the crew members. We developed a guide to assist the moderator in leading the discussions. The guide helped the moderator address several topics related to crew rotations: training, maintenance, infrastructure and operations, management and oversight, readiness, crew characteristics, quality of life, lessons learned, and overall satisfaction with the rotational crewing experience. We assured participants of the anonymity of their responses, in that names would not be directly linked to their responses. Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the focus group participants' reasons for the attitudes held toward specific topics and to offer insights into the range of concerns and support for an issue.
The projectability of the information produced by our focus groups is limited for several reasons. First, they represent the responses of Navy ship officers and enlisted personnel from the 19 selected groups. Second, while the composition of the groups was designed to assure a distribution of Navy officers, enlisted personnel, seniority, and ship departments, the groups were not randomly sampled. Third, participants were asked questions about their specific experiences with crew rotations. The experiences of other Navy ship officers and personnel involved in crew rotations, who did not participate in our focus group, may have varied. Because of these limitations, we did not rely entirely on focus groups, but rather used several different methodologies to corroborate and support our conclusions. GAO DRAFT REPORT – DATED APRIL 8, 2008 “FORCE STRUCTURE: Ship Rotational Crewing Initiatives Would Benefit From Top Level Leadership, Navywide Guidance, Comprehensive Analysis and Improved Lessons Learned Sharing” RECOMMENDATION 1: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy to assign clear leadership and accountability for managing rotational crewing efforts. DOD RESPONSE: Non-concur. The Department of the Navy has clear leadership and accountability for the manning of ships and submarines. This management structure includes oversight and leadership within both operational and administrative chains of command. These organizational structures provide for manning, training and equipping all Navy ships and submarines regardless of crewing concept. Additional organizational structure dedicated to rotational crewing is unnecessary and potentially counterproductive. RECOMMENDATION 2: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy to establish an overarching implementation team to provide day-to-day management oversight of rotational crewing efforts, coordinate and integrate efforts, and apply their results to the Fleet. DOD RESPONSE: Non-concur. The Navy already exercises day-to-day management to support ship and submarine manning and training. An implementation team dedicated to rotational crewing is unnecessary and potentially counterproductive. RECOMMENDATION 3: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy to develop and promulgate overarching guidance to provide the high-level vision and guidance needed to consistently and effectively manage, implement, and evaluate all rotational crewing efforts. DoD RESPONSE: Non-concur. The Navy has sufficient guidance in place to provide the high-level vision necessary to manage ship and submarine manning. Although rotational crewing includes some unique crew considerations and support requirements, the training and support of Sailors involved in rotational crewing are little different than those for Sailors in the standard crewing process. RECOMMENDATION 4: The GAO recommends that the Commander, U.S. Fleet Forces direct the development and promulgation of concepts of operations by all ship communities, using or planning to use rotational crewing, that include a description of how rotational crewing may be employed and the details of by whom, where, and how it is to be accomplished, employed, and executed. DoD RESPONSE: Partial concur. The Navy already uses appropriate concepts for Fleet operations. When or if additional rotational crewing is warranted, the Navy will issue specific guidance, instructions, and/or concepts of operations. 
RECOMMENDATION 5: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy, under the purview of the implementation team, to develop a standardized, systematic method for data collection and analysis, assessment and reporting on the results of rotational crewing efforts, including a comprehensive cost-effectiveness analysis that includes life cycle costs, for all rotational crewing efforts. DoD RESPONSE: Partial concur. The Littoral Combat Ship is the only new ship class that currently plans on implementing rotational crewing. The Navy has no plans for broad general application of rotational crewing to all ship classes, and a standing implementation team and data collection is unnecessary. The Navy will conduct appropriate studies to determine if and when additional rotational crewing is appropriate based on cost effectiveness. RECOMMENDATION 6: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy, under the purview of the implementation team, to require as part of the mandatory analysis of alternatives in the concept refinement phase of the defense acquisition process, assessments of potential rotational crewing options for each class of surface ship in development, including full life cycle costs of each crewing option. DoD RESPONSE: Partial concur. The Department of Defense agrees that all feasible crewing options should be considered during the concept refinement phase of the defense acquisition process. Ships determined to have a potential advantageous rotational crewing application will assess and include this option among the various crewing alternatives reported by the Analysis of Alternatives. RECOMMENDATION 7: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy to develop overarching guidance to ensure the systematic collection and dissemination of lessons learned pertaining specifically to rotational crewing. DoD RESPONSE: Non-concur. The Navy already uses "lessons learned" tools as part of the rotational crewing. Further guidance to use these tools is not needed. RECOMMENDATION 8: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy to incorporate components of the lessons learned approach outlined in the Atlantic Fleet DDG Sea Swap initiative concept of operations, including, among other things, establishing a lessons learned team, developing a data collection plan, and increasing use of the Navy Lessons Learned System. DoD RESPONSE: Non-concur. The Department of the Navy already relies on data collection and analysis from ships. Requiring already implemented rotational crewing efforts to adopt experimental data collection procedures is unnecessary. Procedures are already in place for crews, rotational or standard, to provide data to the chain of command to identify improvements. In addition to the contact named above, Patricia Lentini, Assistant Director; James R. Bancroft; Renee S. Brown; Karen (Nicole) Harms; Jeffrey R. Hubbard; Roderick W. Rodgers; Rebecca Shea; Christopher T. Watson; and Johanna Wong made significant contributions to this report.
The Navy faces affordability challenges as it supports a high pace of operations and increasing ship procurement costs. The Navy has used multiple crews on some submarines and surface ships and has shown that this approach can increase a ship's operational availability. GAO was asked to evaluate the extent to which the Navy, for ship rotational crewing, has (1) employed a comprehensive management approach, (2) developed and implemented guidance, (3) systematically collected and analyzed data and reported findings, and (4) systematically collected and used lessons learned. To conduct this work, GAO analyzed Department of Defense (DOD) and Navy documentation and best practices for transformation, conducted focus groups, and interviewed DOD and Navy officials. Rotational crewing represents a transformational cultural change for the Navy. While the Navy has provided leadership in some rotational crewing programs, the Navy has not fully established a comprehensive management approach to coordinate and integrate rotational crewing efforts across the department and among various types of ships. GAO's prior work showed that sound management practices for implementing transformational programs include ensuring that top leadership drives the change and dedicating an implementation team. The Navy has not assigned clear leadership and accountability for rotational crewing or designated an implementation team to ensure that rotational crewing receives the attention necessary to be effective. Without a comprehensive management approach, the Navy may not be able to lead a successful transformation of its crewing culture. The Navy has promulgated crew exchange instructions for some types of ships that have provided some specific guidance and increased accountability. However, the Navy has not developed an overarching instruction that provides high-level guidance for rotational crewing initiatives, and it has not consistently addressed rotational crewing in individual ship-class concepts of operations. Defense best practices hold that a concept of operations should describe how a set of capabilities may be employed to achieve objectives and identify by whom, where, and how this is to be accomplished. The Navy has conducted some analyses of rotational crewing; however, it has not developed a systematic method for analyzing, assessing, and reporting findings on the potential for rotational crewing on current and future ships. Despite using a comprehensive data-collection and analysis plan in the Atlantic Fleet Guided Missile Destroyer Sea Swap, the Navy has not developed a standardized data-collection plan that would be used to analyze all types of rotational crewing, and life-cycle costs of rotational crewing alternatives have not been evaluated. The Navy has also not adequately assessed rotational crewing options for future ships. For new ships in development, DOD guidance requires that an analysis of alternatives be completed. These analyses generally include an evaluation of the operational effectiveness and estimated costs of alternatives. In recent surface ship acquisitions, the Navy has not consistently assessed rotational crewing options. In the absence of such assessments, evaluations of a cost-effective force structure are incomplete and the Navy does not have a complete picture of the number of ships it needs to acquire. The Navy has collected and disseminated lessons learned from some rotational crewing experiences; however, some ship communities have relied on informal processes.
The Atlantic Fleet DDG Sea Swap initiative used a systematic process to capture lessons learned. However, in other ship communities these efforts were not systematic and did not use the Navy Lessons Learned System. By not systematically recording and sharing lessons learned from rotational crewing efforts, the Navy risks repeating mistakes and could miss opportunities to more effectively implement crew rotations.
The Army’s current mission at Rocky Mountain Arsenal is to clean up the contaminated soils, structures, and groundwater there. The arsenal, established in 1942, occupies 17,000 acres northeast of Denver, Colorado, and is contaminated from years of chemical and weapons activities. The Army manufactured chemical weapons, such as napalm bombs and mustard gas, and conventional munitions until the 1960s and destroyed weapons at the arsenal through the 1980s. In addition, it leased a portion of the arsenal to Shell from 1952 to 1987 to produce herbicides and pesticides. In 1983, the United States sued Shell Oil Company for its share of the cleanup costs. In February 1989, after extended litigation, the Army and Shell signed the Rocky Mountain Arsenal Settlement Agreement and the related Rocky Mountain Arsenal Federal Facility Agreement. The agreements apportion cleanup costs to be paid by each party and costs to be shared by both, direct that environmental legislation be complied with, and provide a procedure for resolving disputes. An additional document, the Army/Shell Rocky Mountain Arsenal Financial Manual, provides an overview of financial, accounting, and auditing policies for costs related to the cleanup. Descriptions of the agreements and cost categories and guidance are contained in appendixes I and II. Shell uses contractors for cleanup activities. Two primary contracts provide for studies and cleanup activities and cover about 86 percent of Shell’s shared costs. A third contract provides for public affairs support. Each quarter, Shell provides the Army a claim for its allocable, or shared, costs. After review, the Army generates a quarterly statement, from which the Army determines how much each party owes. Under the agreements, the shared cost to be borne by each party is a percentage of the total shared costs (see table 1). As we previously reported, when the Army negotiated the settlement agreement, it estimated the shared cleanup cost would be less than $700 million, which would not have breached the demarcation between the 65/35 percent split and the 80/20 percent split. The Department of Defense (DOD) currently estimates the cost for arsenal cleanup at $2.1 billion. As of December 1995, the Army’s quarterly statement showed shared costs of $656 million. Army officials stated that shared costs reached $700 million in November 1996, and thus, the Army would begin paying 80 percent of the shared costs. According to Army officials, as of December 1995, the Army had incurred $308 million in costs not shared by Shell. Shell officials told us Shell’s nonallocable costs amounted to $95 million for studies, cleanup activities, and program management costs, including litigation. The Army’s process to review cost sharing claims under its settlement agreement with Shell is insufficient to ensure that costs are documented and appropriate. Weaknesses in the process involve (1) documentation to support claims, (2) agreements to define which costs should be shared, (3) separation of duties for recording and reviewing shared costs, and (4) documentation of decisions on the treatment of capital assets and disposition of real estate. Federal standards require that, among other elements, internal control systems provide reasonable assurance that assets are safeguarded and that revenues and expenditures are recorded and accounted for properly. The Arsenal Financial Manual allows costs to be disputed on several grounds. 
Specifically, costs can be disputed if: the work was not supported by a task plan, the work was not performed or the costs were not incurred, duplicate charges were made, or the costs were arbitrary and capricious in comparison with normal commercial practices. However, the Army’s review of the costs to be shared with Shell has been minimal. Our work showed that additional documentation is available in most cases and could have been reviewed by the Army. In some cases, however, more documentation would have been needed to perform detailed reviews. We examined 153 randomly selected summary vouchers covering $31 million of Shell’s allocable costs incurred from January 1988 to February 1995. As part of this examination, we reviewed documentation that Shell had provided the Army in support of its quarterly cost claim. We also reviewed secondary documentation maintained by the primary contractor. Based on these examinations and additional data later provided by Shell, we stated in our draft report that 31 entries for items totaling $3.1 million lacked the documentation needed for the Army to review the appropriateness of the cost claims. In some cases, the claims were partially documented, and in others, there was no documentation provided. In commenting on the draft report, Shell stated that in every instance, adequate information was either already in our possession or provided to us in meetings during March and April 1996. Shell further stated that full support was attached to invoices for each of three examples cited in our report. We again met with representatives of Shell and its principal contractor, Morrison Knudsen, in November 1996, but most of the documentation was not yet available and we agreed to examine additional documentation that was provided to us in December 1996. As a result of the most recent data, we revised the examples described below. The difficulty in obtaining documentation for the three examples illustrates our point that the Army needs to have procedures for documentation and the examination of claims. Taking the additional information into consideration, the following are examples from our sample of selected summary vouchers where insufficient documentation was available to make an adequate review of shared costs. For a $666,035 line item at first described as “other direct costs,” support for only $30,125 had been provided to us at the time of our draft report. Shell provided detailed support by December 1996 for an additional $479,015. The detailed support indicated that the costs were for contractor studies and left $156,895 in need of further documentation. $301,977 for brine disposal by a subcontractor did not have, at the time of our draft report, information on the quantity to be paid for, such as number and size of railroad tank cars. The separate agreements cited in Shell’s comments permitted payments up to a limit, but data on actual amounts were still needed. Such data were provided for $266,723, but were still lacking for the remaining $35,254. $187,275 of $326,566 for operations of an incinerator appeared to be for incentive awards but was not specified sufficiently, such as the number or type, to show the basis for the expenditure. The claim did not actually include awards, and support for $166,183 was provided in December 1996, although a clear link to invoices was not always shown. The remaining $21,092 lacked sufficient detail. Overall, the Army does not have detailed procedures for examining Shell’s shared costs. 
In the absence of such procedures, the Army's examination consists of comparing Shell's monthly costs with the previous month's costs to look for significant variances. We found that the Army has not fully exercised its authority to review the costs of Shell's contractors and subcontractors. For example, the Army shared about $48 million in costs that Shell claimed for technical studies, but has not examined the relevant contracts. Army officials said that they operate with Shell in an atmosphere of trust. They also stated that they believe that they have no right to interfere in Shell's relationship with its contractors and that standard government contract controls do not apply to Shell's commercial contracts. Notwithstanding these points, the Army is permitted to review Shell's costs under the arsenal agreements and should do so to ensure that costs being claimed are appropriate. The arsenal agreements require that shared costs be supported by an approved task plan or other written agreement. The arsenal's Program Manager's Office and Shell officials have made numerous agreements implementing the guidance in the settlement agreement. However, not all agreements were written, and written agreements sometimes lacked approval signatures, estimates of costs to be incurred, clear descriptions of the tasks to be done, or statements that costs can be shared. Of the 153 summary vouchers we reviewed, 48 lacked specific written support, such as a signed agreement, a statement stipulating that the item was allocable or reimbursable, or authorization for the task. In some cases where signed agreements were lacking, Shell and the Army used their commercial and government practices as a standard in determining reasonableness of costs. Community relations is one area where cost-sharing agreements have not been finalized and documentation was limited, making it difficult to adequately review claims. A written agreement was drafted and dated June 1990 (retroactive to January 1988), but was never signed. Although the unsigned agreement called for the Army to assume the lead responsibility in this area, Shell retained a contractor to provide public relations support. Shell and Army officials stated that for guidance on community relations activities, they refer to the requirements of the Comprehensive Environmental Response, Compensation, and Liability Act. Our random sample included $481,000 in charges for public affairs activities, and the Army had approved them based on two Shell statements of allocable costs that gave totals for broad categories. The charges were incurred from August 1991 through December 1992; the largest categories were for public affairs activities regarding the successful operation of an incinerator ($245,047), public education/involvement ($120,927), and agency support ($73,864). Each category in the statements included a brief summary but no breakout of amounts for specific activities. Breakouts were often available on request, but detailed expense data were incomplete. For example, Shell provided additional data to us showing that public education/involvement included subcategories such as an arsenal brochure ($19,066), a Fish and Wildlife Service Spring Event ($14,480), and Bald Eagle Day ($15,567).
Further, the detailed data for Bald Eagle Day showed $4,679 for unspecified labor costs; $4,622 for promotional "eagle pencils"; $3,026 for advertising; $1,278 for bus service; and other categories of less than $1,000 each for such items as photographs, videotape, copying, and box lunches. We did not review the appropriateness of individual cost claims. However, the above examples further demonstrate that the Army has not ensured it has sufficient information to review shared costs. The arsenal's Director of Public Affairs stated that he would require supporting documentation on such claims in the future. Federal standards require that internal control systems provide reasonable assurance that expenditures are documented, recorded, and accounted for properly. We found that the Army has not adequately documented its decisions concerning some capital assets and real estate. For example, as part of interim response activities, Shell had to vacate an office building it owned and occupied on the arsenal. The Army provided land on the arsenal for Shell to build a replacement building. The Army also reimbursed Shell for the full $670,000 cost of construction. Several provisions in the arsenal agreements could allow construction to take place on the arsenal. Depending on the circumstances that caused the building to be vacated and a replacement built, the construction might have been an Army-only cost, a Shell-only cost, or a shared cost. In this case, the building was treated as an Army-only cost, but the reasons for this treatment were not documented. In another instance, the Army did not document the basis for a transaction with Shell. Shell purchased property located just outside the arsenal's north boundary for about $4 million. The Army needed access to the land to conduct offsite groundwater treatment activities. The groundwater treatment was a shared cost. Shell purchased the land because it was able to do so more quickly than the Army would have been able to, according to Army and Shell officials. For its use of the property, the Army paid Shell about $2 million through transaction adjustments—half the purchase price. The land is well situated for commercial and industrial development as it is near an interstate highway and the new Denver International Airport (see fig. 1). Shell will retain the land when cleanup is complete. Another instance involved capital assets purchased by Shell and charged as an allocable cost. The Army could receive a proportionate credit for such assets as vehicles, office equipment, and furniture when they are disposed of or sold. However, the identification and disposition of the allocable assets were not documented. In discussing this issue, Army and Shell officials did not provide detailed documentation, but described the disposition of a large set of assets relating to an incinerator. They stated that the Army had received a credit for items sold and that other items were being stored. Because the same Army staff members record, review, and audit Shell's allocable costs, the Army does not have adequate control over the shared cost process. Federal internal control standards require that key duties and responsibilities such as recording and reviewing transactions be separated systematically among individuals to protect the government against error, waste, and wrongful acts. Moreover, the Army and Shell staff who conduct the day-to-day operation of the shared cost system also review the shared costs annually.
In 1988 and 1989, the Army Audit Agency reviewed Shell’s costs and found numerous problems, including insufficient documentation and costs claimed without a task plan. Although the annual reviews by operating staff continue, there have been no other independent verifications or follow-on audits of Shell’s shared costs. The Army will be paying 80 percent of millions of dollars in shared costs for the cleanup of Rocky Mountain Arsenal. Strengthening its review process for shared cost claims is key to ensuring appropriate sharing of costs. Thus, we recommend that the Secretary of the Army establish specific procedures for the examination of Shell’s cost claims and documentation, including costs of its contractors and subcontractors; establish standard procedures for the approval and documentation of supplementary agreements regarding the allocability of costs and treatment of capital assets and real estate; and require that such key duties and responsibilities as recording and reviewing transactions be performed by different individuals. Both DOD and Shell provided written comments on a draft of this report (see apps. III and IV). DOD concurred with our recommendations regarding procedures for documentation of costs and agreements, but noted that adequate documentation exists for most shared cost claims. In its comments, Shell did not agree that documentation it made available was insufficient to review the appropriateness of the cost claims. In its comments concerning our two recommendations for procedures to ensure documentation of costs and agreements, DOD stated that most claims were documented. However, we identified cases where documentation for summary vouchers and cost sharing agreements for the tasks involved was lacking. We continue to believe that these conditions represent weaknesses in the Army’s review process. With regard to Shell documentation, we do not recommend action on individual items, but focus on the Army’s review process. We agree that Shell provided records, but the amounts did not always support the summary vouchers we examined. We believe that our comments regarding the weaknesses in the review process are correct, but revised our report to reflect the additional information provided by Shell and its contractor. Our initial review raised questions about support for 55 of 153 items. After discussing the 55 with Shell and its contractor and examining additional contractor documents during March and April 1996, we reduced the number of items with questions to the 31 cited in our draft report, including the 3 examples. Following Shell’s written comments, we met again in November and December regarding the examples. A substantially greater amount is now supported, but gaps remain in each example, as described in this report. Finally, DOD partially concurred with our recommendation for separation of duties, stating that it complies with requirements under procedures now in place. We recognize that internal controls are adapted to the risks being faced and the resources available. DOD has attempted to address such control issues by designating one person in a two-person group to be a staff accountant to review data and the other to make sure data are generally complete. We believe controls could be further strengthened by having others—who do not conduct the day-to-day operation—be responsible for the annual review of shared costs. 
This is of particular concern because only one external review of transactions has been made, and that review occurred just after the settlement agreement was put in place 8 years ago.

We interviewed officials at, and reviewed documentation provided by, the arsenal Program Manager; Shell Oil Company, Denver, Colorado, and Houston, Texas; the Defense Contract Audit Agency, Boise, Idaho; Morrison Knudsen and Holme Roberts Owen, Denver, Colorado; and the state of Colorado. We obtained and reviewed Army and Shell shared cost documentation, but we did not verify the total reported costs. We reviewed 153 randomly selected items from Shell’s journal entries for allocable and reimbursable costs incurred from January 1988 to February 1995. We also reviewed all monthly invoices for allocable costs incurred by the Shell contractors Morrison Knudsen and Holme Roberts Owen from the fourth quarter ending November 1988 through the third quarter of 1995. We examined supporting documents provided by Shell and its contractors. We did not review the appropriateness of individual cost claims. Although we examined additional documentation provided by Shell and its contractor for the 3 examples in our report, we did not pursue additional documentation for the remaining 28 of the 31 sample items cited in the report. We conducted our review from April 1995 to December 1996 in accordance with generally accepted government auditing standards.

Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to appropriate congressional committees. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please contact me on (202) 512-8412. Major contributors to this report are listed in appendix V.

The Army and Shell formalized their agreements and guidance regarding activities and costs for environmental cleanup at the Rocky Mountain Arsenal in the Rocky Mountain Arsenal Settlement Agreement, the Federal Facility Agreement, and the Financial Manual. The Settlement Agreement establishes a mechanism for apportioning cleanup responsibilities and costs between the Army and Shell. The agreement defines allocable costs and includes lists of Shell-only and Army-only costs. Under this agreement, Shell may hire contractors “subject to the approval of the Army.” The Federal Facility Agreement ensures compliance with environmental legislation, including the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980 (42 U.S.C. 9601), and establishes a procedure that allows the various participants to cooperate in environmental cleanup at the arsenal. It “provides the process for the planning, selection, design, implementation, operation, and maintenance of response actions taken pursuant to CERCLA as the result of the release or threatened release of hazardous substances, pollutants or contaminants at or from the arsenal, including the public participation process.” The Financial Manual describes the financial, accounting, and auditing procedures to be used for shared costs incurred in connection with arsenal cleanup. It describes primary and secondary documentation for allocable costs and includes examples of some documentation. It provides procedures under which cost-related disputes between the Army and Shell are to be settled, but it does not include procedures for examining and accepting shared costs.
The Manual stipulates that the procedures described in it will be conducted in accordance with generally accepted accounting principles consistently applied.

The following material summarizes cost definitions found in the Rocky Mountain Arsenal Settlement Agreement, which provides guidance regarding allocable, reimbursable, Shell-only, and Army-only costs. The Army and Shell supplement this guidance with agreements on the specific tasks to be included in each category.

The Settlement Agreement defines allocable costs as all response costs, excluding Army-only and Shell-only costs; all response costs for activities outside the arsenal boundaries; associated costs for involvement of the Environmental Protection Agency, the Agency for Toxic Substances and Disease Registry, and the Department of the Interior; all natural resource damage assessment costs; and other costs agreed on in writing by the Army and Shell as allocable costs.

Exhibit D of the Settlement Agreement describes Shell-only costs as those pertaining to the following actions: demolition, removal, and disposal of all buildings and structures owned by Shell or its predecessor company (includes a list of the structures); demolition, removal, and disposal of all equipment in Shell-owned structures and in buildings leased by Shell immediately before the effective date of the Settlement Agreement; assessment activities associated with the two above activities; Shell staff at the Central Repository and the Joint Administrative Record and Document Facility; Shell activities associated with dispute resolution, judicial review, and the Technical Review Committee; and Shell’s program management, including labor, materials, supplies, and overhead for Shell’s Denver Project Site Team, litigation support, legal fees, and auditing expenses.

Exhibit C of the Settlement Agreement describes Army-only costs as those pertaining to the following actions: assessment, demolition, removal, and disposal of all buildings, structures, and equipment not listed as Shell-only in Exhibit D; assessment, identification, removal, and disposal of unexploded ordnance; assessment, decontamination, removal, treatment, and/or disposal of all soil, excluding soil that includes a Shell compound, in specified areas; Army staff, and all facilities and equipment, for the Central Repository and the Joint Administrative Record and Document Facility; Army activities associated with dispute resolution, judicial review, and the Technical Review Committee; Army program management, including labor, materials, supplies, and overhead for the Army’s arsenal Program Manager’s Office and its divisions, litigation support, legal fees, and auditing expenses; and other specific miscellaneous actions, such as emergency action responses to a release of pollutants or contaminants.

Margaret Armen, Senior Attorney
Pursuant to a congressional request, GAO reviewed cleanup costs claimed by Shell Oil Company and shared by Shell and the U.S. Army at Rocky Mountain Arsenal, Colorado, focusing on: (1) selected aspects of the processes that the Army uses to review cost claims under its settlement agreement with Shell; and (2) the adequacy of these processes. GAO found that: (1) the process the Army uses to review claims under its cost sharing for cleanup at the arsenal has not been sufficient to ensure that costs claimed by Shell are appropriate; (2) specifically, the review process does not always ensure that sufficient documentation is available to review claimed costs and that formal agreements exist to define which costs should be shared; (3) the review process generally does not look at the detailed documentation supporting cost claims; (4) GAO's work showed that in most cases further information was available, but in some cases it was not; (5) also, the review process does not have effective checks and balances, such as separation of key duties and responsibilities and independent reviews; (6) for example, staff associated on a daily basis with the shared cost system also conduct the annual assessment of the shared costs; and (7) the combination of limited documentation and inadequate controls places the government at risk of paying for unwarranted charges.
With the enactment of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) in 1980, the Congress created the Superfund program to clean up the nation’s most severely contaminated hazardous waste sites. The Congress extended the program in 1986 and 1990 and is now considering another reauthorization. Under CERCLA, EPA investigates contaminated areas and places the most highly contaminated sites on the National Priorities List (NPL) for study and cleanup. As of December 1996, there were 1,210 sites on the NPL. After a site is placed on the NPL, EPA extensively studies and evaluates the site to determine the appropriate cleanup remedy for it. The remedy selected depends upon the site’s characteristics, such as the types and levels of contamination, the risks posed to human health and the environment, and the applicable cleanup standards. The site’s cleanup can be conducted by EPA or the party responsible for the contamination, with oversight by EPA or the state.

Through fiscal year 1995, the latest period for which EPA has data, EPA had selected incineration as a Superfund cleanup remedy 43 times, or in about 6 percent of the decisions on remedies it had reached through that date. At the time of our review, three incinerators were operating at Superfund sites—the Bayou Bonfouca site in Louisiana, the Times Beach site in Missouri, and the Baird and McGuire site in Massachusetts. As of October 1996, EPA planned to use incineration at four additional sites.

Incineration is the burning of substances by a controlled flame in an enclosed area that is referred to as a kiln. Incineration involves four basic steps: (1) wastes, such as contaminated soil, are prepared and fed into the incinerator; (2) the wastes are burned, converting contamination into residual products in the form of ash and gases; (3) the ash is collected, cooled, and removed from the incinerator; and (4) the gases are cooled, remaining contaminants are filtered out, and the cleaned gases are released to the atmosphere through the incinerator’s stack. (See fig. 1.)

Incinerators may be fixed facilities that accept waste from a variety of sources, or they may be transportable or mobile systems. Fixed facility hazardous waste incinerators are required by the Resource Conservation and Recovery Act of 1976 (RCRA) to obtain an operating permit from EPA. RCRA regulates all facets of the generation, transportation, treatment, storage, and disposal of hazardous wastes in the United States. RCRA requires that fixed facility hazardous waste incinerators be operated according to EPA’s regulations and be inspected by EPA every 2 years. Incinerators used to clean Superfund sites are generally “transportable,” that is, they are transported to the site in pieces, assembled, and removed when the cleanup is complete. These incinerators are constructed and operated by contractors. CERCLA exempts any portion of a cleanup action conducted entirely on-site, including incineration, from the need to obtain any permit. However, CERCLA requires EPA to apply legally applicable or relevant and appropriate environmental standards from other federal laws, including RCRA, to Superfund cleanups. Accordingly, EPA requires incinerators at Superfund sites to meet RCRA’s substantive requirements, such as the act’s standards for emissions.

EPA relies on four principal methods to ensure the safe operation of incinerators used to clean up Superfund sites.
These methods are (1) setting site-specific standards for emissions and operations, (2) incorporating safety features into an incinerator’s emergency systems, (3) monitoring emissions at the incinerator’s stack and along the site’s perimeter, and (4) providing 24-hour on-site oversight. (See app. I for more details on the safeguards at the three incinerators in operation at the time of our review.) EPA establishes specific cleanup standards for each incinerator used at a Superfund site. These standards are based on studies of the site’s characteristics (e.g., the type and concentration of contamination present) conducted during the incinerator’s design and construction. Standards can be adopted from other environmental programs or laws, such as RCRA or the Toxic Substances Control Act. Typically, RCRA’s standards for fixed facility hazardous waste incinerators are applied. RCRA’s standards govern the extent to which an incinerator must destroy and remove contaminants and set limits on emissions from the incinerator. EPA establishes the operating parameters needed for the incinerator to achieve the emissions standards and tests the parameters through a “trial burn” required under RCRA. The operating parameters can include the temperature of the kiln, the minimum oxygen levels needed to break down contaminants in the kiln, and the maximum carbon monoxide levels that may be produced. Although not required by EPA’s regulations, a trial burn plan was reviewed by a RCRA expert at all the sites we visited to determine whether the proper operating conditions were being tested. According to EPA officials, if the incinerator operates within the parameters established at the trial burn, the incinerator will be operating safely. Besides establishing standards for emissions and operations, EPA requires engineering controls to prevent the standards from being exceeded. In addition, incinerators at the three sites we visited had built-in safety features unique to each model to prevent excessive emissions of contaminants in the event of an emergency shutdown. RCRA’s regulations, which EPA applies at Superfund sites, require that incinerators have devices, called automatic waste feed cutoffs, that will stop contaminated waste from being fed into an incinerator when the operating conditions deviate from the required operating parameters. The waste feed would be cut off, for example, when a change in pressure or a drop in temperature occurred that could compromise the kiln’s effective incineration of the contaminants. These cutoffs are set with a “cushion” so that the waste feed shuts down before the incinerator operates outside the established parameters. The number and type of waste feed cutoffs will depend on the requirements for each site. According to EPA officials, some cutoffs are routine, to be expected during the normal course of an incinerator’s operations, and a sign that the safety mechanisms are working properly. For example, cutoffs can be triggered by expected changes in pressure within the kiln brought on by variations in the waste input stream. However, other cutoffs, especially repeated cutoffs, can be signs of problems. At the three sites we visited, all of the incinerators had some additional safety measures, not required by regulation, in the event that a critical part of the incinerator failed. 
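The automatic waste feed cutoff described above amounts to a threshold check run continuously against the parameters set at the trial burn. The sketch below illustrates that logic only; the parameter names, limits, and 5 percent cushion are hypothetical values chosen for illustration, not figures from any site's trial burn.

    # Hypothetical operating limits of the kind set during a trial burn.
    # "low" entries are minimums (e.g., kiln temperature, oxygen); "high"
    # entries are maximums (e.g., carbon monoxide). All values illustrative.
    LIMITS = {
        "kiln_temp_f": {"low": 1600.0},
        "oxygen_pct": {"low": 3.0},
        "carbon_monoxide_ppm": {"high": 100.0},
    }
    CUSHION = 0.05  # cut off the waste feed 5 percent inside each limit

    def feed_permitted(readings, limits=LIMITS, cushion=CUSHION):
        """Return False (automatic waste feed cutoff) when any reading
        crosses the cushioned threshold just inside its permitted limit."""
        for name, bounds in limits.items():
            value = readings[name]
            if "low" in bounds and value < bounds["low"] * (1 + cushion):
                return False
            if "high" in bounds and value > bounds["high"] * (1 - cushion):
                return False
        return True

    # Readings comfortably inside the limits keep the feed running ...
    print(feed_permitted({"kiln_temp_f": 1700.0, "oxygen_pct": 5.0,
                          "carbon_monoxide_ppm": 40.0}))   # True
    # ... but oxygen drifting toward its minimum trips the cutoff before
    # the 3.0 percent limit itself is violated.
    print(feed_permitted({"kiln_temp_f": 1700.0, "oxygen_pct": 3.1,
                          "carbon_monoxide_ppm": 40.0}))   # False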
At the Times Beach and the Bayou Bonfouca sites, the incinerators have emergency systems that fully shut down the incinerator and decontaminate the gases remaining in the system at the time of the shutdown. These systems seal off the gases and expose them to a high-temperature flame to destroy any residual contamination. At the Baird and McGuire site, the emergency system ensures that metals and particulates are removed before gases are emitted from the kiln. The most common reason for activating the emergency systems at the three sites was a shutdown caused by a power outage. EPA continuously monitors the air in the vicinity of an incinerator to ensure that emissions from the stack and from areas where soil is being excavated before being put into the incinerator do not exceed the maximum permitted levels. Air monitoring at the sites involves measuring conditions in real time and performing detailed laboratory analyses of samples that are collected over a longer period of time. For example, at the Baird and McGuire site, stack emissions are monitored continuously to measure key indicators of combustion, such as the oxygen levels in exhaust gases, to ensure that the incinerator is operating properly. For organic contamination, a more detailed laboratory analysis is carried out during the trial burn to provide additional assurance that dioxin, a cancer-causing substance produced by the burning of organic substances, is not excessively emitted. The Baird and McGuire site also has nine air monitors at its perimeter, each of which is hooked up to alarms that sound if emission levels approach the established parameters. These monitors, which are intended primarily to detect possible emissions from the on-site excavation of contaminated soil, monitor and record data every minute. According to the incineration contractor’s project manager at the Baird and McGuire site, the air monitors picked up elevated levels only once during an excavation, when a drum of chemicals was removed. In a situation such as this, the excavation is slowed to bring emissions down to required levels. According to EPA’s reports for the three sites we visited, emissions from the incinerators’ stacks never exceeded the permitted levels. Although 24-hour oversight is not required by regulations or formal EPA policy, Corps of Engineers or state officials continuously observed the operations of the incinerator at each of the sites we visited. For the two cleanups that EPA managed (at the Baird and McGuire and Bayou Bonfouca sites), EPA had contracted with the U.S. Army Corps of Engineers for on-site oversight, while at Times Beach, where a responsible party was conducting the cleanup, a Missouri state agency provided oversight. At the time of our visit, these sites had staff to cover operations 24 hours a day. For example, at Baird and McGuire, 12 Corps of Engineers staff were assigned to monitor the incinerator’s operations. On-site observation involves visual inspections and record reviews to ensure that the incineration companies are meeting the operating conditions specified by EPA. At the sites we visited, Corps of Engineers or state officials were responsible for checking the operating parameters displayed on computer screens in the incineration control rooms and inspecting measurement devices on incineration equipment to verify that they were working properly. 
For example, at Times Beach, a state official monitored operations from an on-site computer screen, while a state RCRA employee obtained the computerized information from his office in the state capitol to ensure that the conditions of the state’s RCRA permit were being met. At Bayou Bonfouca, Corps officials examined operation log books and talked to incinerator operators to look for any problems and oversaw the procedures for testing and sampling emissions from the incinerator. The officials were also responsible for reviewing the air-monitoring reports and operation summary reports required of the incineration company and reporting their findings to EPA. In addition to the safeguards discussed above, EPA planned two additional methods to promote the safe operation of Superfund incinerators but never fully implemented them. First, EPA issued a directive requiring inspectors from its hazardous waste incinerator inspection program to periodically evaluate Superfund incinerators. This requirement had not been followed at two of the three incinerators operating at the time of our review. Second, EPA has not carried out its intention to systematically ensure that the lessons learned about an incinerator’s operations in one incineration project are applied to subsequent projects. EPA is relying upon informal communication to transfer “best practices” from one incineration project to the next. In 1991, EPA issued a directive requiring that the same type of inspections that are conducted at RCRA-permitted hazardous waste incinerators be conducted at Superfund incinerators. In 1993, EPA issued interim guidance on how to perform these inspections at Superfund incinerators. This guidance required that inspectors in EPA’s regional offices review the operating records for Superfund incinerators and examine the units to ensure that they were operating within their established parameters. Only one of the three incinerators we visited had received such an inspection. That incinerator received two inspections, one of which was conducted while the incinerator was shut down for maintenance. EPA regional staff we talked to were unaware of the directive and guidance on these inspections. EPA headquarters personnel told us that they were unaware that the inspections were not taking place but confirmed with the regions that only one region was inspecting Superfund incinerators. EPA officials attributed the lack of inspections to the higher priority given to other enforcement demands and a reorganization of enforcement functions, which muddied the responsibility for inspecting the incinerators. Headquarters officials said they would encourage the regions to do the inspections in the future. According to officials from EPA’s Office of Enforcement and Compliance Assurance (OECA), who are responsible for implementing the inspection program, RCRA incinerator inspectors had visited Superfund incinerators when the guidance was first issued in 1993. However, these inspectors said their inspections were hampered because they did not have a site-specific document containing the requirements for each incinerator’s operations that they could use to evaluate these operations. 
At Superfund sites where transportable incinerators are used, EPA may specify standards, operating parameters, emergency controls, and requirements for air monitoring and on-site oversight in various documents, such as a contract with the operator of the incinerator, a court-approved consent decree with the responsible party, or a work plan for the site. In contrast, fixed facility hazardous waste incinerators require a RCRA permit, which documents the conditions under which an incinerator must operate. Inspectors use the conditions specified in the permit as criteria for evaluating the incinerator’s operations. For Superfund incinerators, however, an operating permit is not required. The 1993 interim guidance for inspecting Superfund incinerators recognized the need for a single document specifying site-specific operating requirements and procedures and stated that such a document would be developed. However, no such document was developed because, according to EPA officials, other priorities intervened. EPA officials attributed the lack of recent Superfund incinerator inspections, in part, to the lack of a consolidated list of requirements. The Superfund, RCRA, and OECA officials we interviewed on this question agreed that Superfund incinerators should be inspected. They stated that experienced RCRA hazardous waste incinerator inspectors in EPA’s regional offices have knowledge and experience that makes them well qualified to evaluate the operations of Superfund incinerators. These officials believed that an inspection by an outside, independent inspector was important even if an incinerator had on-site oversight. RCRA officials told us that at the few RCRA-permitted hazardous waste incinerators with on-site inspectors, the inspectors are rotated every 6 months in order to maintain their independence and objectivity. In addition, they said that experienced incinerator inspectors would have more expertise than the Corps of Engineers or state staff assigned to oversee the incinerators’ operations. Although these staff do receive training, they are generally not experts on incineration. Because EPA site managers may work on as few as one or two projects at a time and because incineration is not a common remedy at Superfund sites, managers may have limited experience with incineration. However, EPA does not have any formal mechanism to share the lessons learned about an incinerator’s operations. The need for information-sharing is illustrated by experiences at two sites we visited. The Bayou Bonfouca site had a policy to stop feeding waste to the incinerator during severe storms. This policy was adopted to reassure the public that the incinerator would not suffer an emergency shutdown during a storm-related power outage. The Times Beach site, which was using the same incinerator model, did not formally adopt this policy until after a severe storm had knocked out the power at that incinerator, causing an emergency shutdown. The storm and power outage caused the emergency emissions system and the perimeter air monitors to fail. (See app. I for details.) The lessons learned from these problems could be applied to future incineration projects to prevent similar problems from arising. However, EPA has no formal mechanism to ensure that other incineration projects can benefit from the Times Beach experience. EPA officials agreed that they should be sharing the lessons learned from each site. 
According to officials, they had intended to do so by issuing fact sheets, but the effort was dropped before any fact sheets were issued. The officials stated that the fact sheets were not issued because of a fear that information on problems with incinerators’ operations could be used against them in litigation. In addition, they attempted to have monthly conference calls with all of the managers of incineration sites, but the effort was soon discontinued. However, EPA officials told us that they do informally share lessons learned through discussions with regional staff responsible for incineration sites. Also, they encourage site managers to visit other incineration sites to learn from the experiences there; however, they do not currently intend to revive their plans for preparing fact sheets.

EPA employs a number of techniques to encourage the safe operation of Superfund incinerators. These techniques include mechanical features, such as air monitors, as well as operational procedures, such as 24-hour independent oversight. However, residents of the areas surrounding incinerators frequently desire an extra degree of assurance that the incinerators are operating safely. EPA has not followed through on other opportunities to improve its oversight of incinerators and thereby provide additional assurance to the public. First, EPA has not followed its own policy of having RCRA hazardous waste incinerator inspectors inspect Superfund incinerators. Although these inspections would provide the public with independent evaluations of the incinerators’ compliance, they did not take place, in part, because consolidated lists of the standards, design requirements, and operating rules for each site where incineration is used were not made available to inspectors. Inspectors could use such lists, just as they use the operating permits for fixed facility hazardous waste incinerators, as an aid in evaluating compliance. Second, EPA’s attempts to systematically share the lessons learned from site to site were never fully implemented. Because incinerators are being used at relatively few Superfund sites, EPA project managers may have little or no experience with them. These managers would benefit from the experiences of other managers of sites where incinerators have been used. At the sites we visited, operational problems occurred that might be avoided at other incineration projects if the knowledge gained was preserved and shared.

To provide further assurance that incinerators at Superfund sites are being operated safely, we recommend that the Administrator, EPA, implement the agency’s guidance for having RCRA hazardous waste incinerator inspectors evaluate Superfund incinerators, including the development of a single document specifying site-specific operating requirements and procedures for these incinerators, and document the lessons learned about safe operation from the experiences of each Superfund site where incineration is used and institute a systematic process to share this information at other sites where incinerators are used.

We provided copies of a draft of this report to EPA for its review and comment. On January 29, 1997, we met with EPA officials, including a senior process manager from EPA’s Office of Emergency and Remedial Response and officials from EPA’s Office of Enforcement and Compliance Assurance and Solid Waste and Emergency Response, to obtain their comments. EPA generally agreed with the facts, conclusions, and recommendations in the report.
However, while not disagreeing that the lessons learned should be documented, EPA did question the benefits of preparing voluminous site-specific studies on lessons learned, given the decreasing use of incineration. We concur that the type of documentation should be concise and the format useful. EPA also provided technical and editorial comments, which we incorporated in the report as appropriate. To examine EPA’s oversight of incinerators at Superfund sites, we visited the three Superfund sites with operating incinerators: the Baird and McGuire site in Massachusetts, the Bayou Bonfouca/Southern Shipbuilding site in Louisiana, and the Times Beach site in Missouri. At these sites, we spoke with EPA, state government, U.S. Army Corps of Engineers, and contractor officials to determine how the incinerators operate, what safety measures they employ to ensure safe operation, and what oversight activities occur. We also interviewed EPA officials in regions I, VI, and VII and in the headquarters offices of Solid Waste, Emergency and Remedial Response; Pollution Prevention and Toxics; and Enforcement and Compliance Assurance. In addition, we obtained and analyzed documents and data from EPA and from the relevant states, counties, and responsible parties when necessary. Our work was performed in accordance with generally accepted government auditing standards from February through December 1996. As arranged with your offices, unless you publicly announce its contents earlier, we will make no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies of the report to other appropriate congressional committees; the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Should you need further information, please call me at (202) 512-6520. Major contributors to this report are listed in appendix II. We visited the three Superfund incinerators that were in operation at the time of our review: the Baird and McGuire site in Holbrook, Massachusetts; the Bayou Bonfouca/Southern Shipbuilding site in Slidell, Louisiana; and the Times Beach Superfund site near St. Louis, Missouri. A brief description of the incineration project at each site follows. The Baird and McGuire site, approximately 14 miles south of Boston, is a former chemical manufacturing facility that operated for 70 years until it was shut down in 1983. This 20-acre site is contaminated with approximately 200,000 pounds of chemicals and metals, including creosote, herbicides and pesticides, arsenic, lead, and dioxin. Chemicals from the site have contaminated groundwater, a nearby river, and a nearby lake. EPA chose to incinerate soil and other contaminated material on-site because it judged that this remedy would be the most protective of human health and because complicating factors made other remedies, such as covering the contaminated areas with a clay cap, inappropriate. These factors included the location of part of the site in a 100-year flood plain, the existence of wetlands on the site, and the potential for the contamination to spread farther (via groundwater) if the site was not effectively treated. In addition, dioxin is present at the site, leaving few off-site treatment possibilities because regulations limit the locations at which dioxin-contaminated material can be treated. 
The operation of the incinerator at the Baird and McGuire site began in June 1995 and is expected to be completed in April 1997. The incinerator was designed specifically to remediate the high levels of metal contamination at the site. (See fig. I.1.) It is configured to capture the metals (which cannot be destroyed by the incineration process and may be present in the gases produced by the burn) in a pollution control device before they are emitted into the atmosphere. The incinerator has 13 automatic waste feed cutoffs. In case the incinerator is totally shut down, a diesel backup system will keep filtration systems running to prevent the release of hazardous emissions. Emissions from the site are monitored continuously from the incinerator’s stack and from nine locations along the site’s perimeter. Oversight is carried out by 12 staff from the U.S. Army Corps of Engineers, who receive technical assistance from an engineering consulting firm. According to a Corps engineer at the site, the Corps staff complete inspection reports detailing on-site events 2 to 3 times per day and provide weekly summary reports for EPA’s review. The Bayou Bonfouca site includes 55 acres of sediment and surface water that were contaminated with wood-treating chemicals from an abandoned creosote works plant. The main threats to human health at this site included direct contact with contaminated groundwater, the potential for contamination to spread to a nearby waterway during flooding, and the potential for direct contact with concentrated hazardous material at the unsecured site. From February 1992 through September 1995, EPA incinerated contaminated soil and other material. After incinerating the waste from the Bayou Bonfouca site, EPA began to use the incinerator to burn similar wastes from the nearby Southern Shipbuilding Superfund site. (See fig. I.2.) This site was contaminated with 110,000 cubic yards of sludge, containing mostly polycyclic aromatic hydrocarbons that were left from barge cleaning and repair operations. Polycyclic aromatic hydrocarbons are chemicals formed during the incomplete burning of coal, oil, gas, refuse, or other organic substances. In addition to 15 automatic waste feed cutoff parameters to prevent the incinerator from operating outside the regulatory limits, the incinerator has an emergency stack venting system that further treats the gases from the kiln if the incinerator is totally shut down. In case of a power outage or another event that would cause the major functions of the incinerator to fail, this emergency system draws the kiln gases into an emergency stack where a flame further destroys contaminants. According to an incineration contractor official at the Bayou Bonfouca site, this emergency system prevents the release of kiln gases that exceed emission regulations. Oversight at the Bayou Bonfouca site is carried out by a team of nine Corps of Engineers inspectors. These inspectors check the computer screens in the incinerator’s control room every 2 hours to ensure that the incinerator is operating within the regulatory parameters set during the trial burn. The Corps team also inspects the incinerator’s machinery, is present for all sampling and testing done by the incineration company, and documents all of the automatic waste feed cutoffs. Corps officials review monthly, quarterly, and yearly reports from the incineration contractor. 
Air monitoring at the site includes continuous monitoring from the stack, the excavation site, and other areas of the site, and samples are taken daily for more complete chemical analysis. According to Corps officials, emissions have never exceeded regulatory levels. In addition, EPA Region VI had two RCRA inspections completed at the Bayou Bonfouca site. However, the incinerator was shut down for maintenance at the time of one of the inspections. This Bayou Bonfouca/Southern Shipbuilding project was completed in November 1996. The Times Beach Superfund site is a 0.8-square-mile area, 20 miles southwest of St. Louis, that was contaminated with dioxin. The contamination resulted from spraying unpaved roads with dioxin-tainted waste oil to control dust. EPA decided to incinerate soil from Times Beach and 26 other nearby sites that were contaminated in the same way. (See fig. I.3.) EPA believed that incineration was the best remedy for the large volumes of dioxin-contaminated soil and the large pieces of contaminated debris to be treated. The incineration project at Times Beach began in March 1996 and is expected to be completed in March 1997. The Times Beach site is unusual because EPA obtained a RCRA permit to operate the incinerator. A permit is generally not required at Superfund sites, and the process of obtaining it resulted in some delays in beginning operations. However, EPA regional officials obtained the permit to provide nearby residents with additional assurance that the incinerator would operate safely and would be removed after the project was completed, rather than being kept in place to burn contaminated material from other sites. As required by the permit, the Times Beach incinerator has 17 automatic waste feed cutoffs. In addition, the incinerator includes the same emergency system that is used at Bayou Bonfouca. Oversight at Times Beach is handled primarily by the Missouri Department of Natural Resources. State officials monitor operations on-site and via computer in the state capitol. Three on-site state employees originally provided oversight 24 hours a day. Currently, the state has oversight officials at the site 11-1/2 hours each weekday and 9 hours a day on the weekend. In addition, they conduct unannounced random visits to the site during off hours. To supplement the state’s oversight, St. Louis County inspects operations and tracks the results of air-monitoring testing to ensure that the incinerator’s emissions are in compliance with the limits set in the county’s air pollution permit. According to a county official, although formal inspections are required about once every 2 years, the county informally monitors the site more frequently. As with the other sites, Times Beach has two levels of air monitoring: continuous monitoring and a more detailed laboratory analysis. According to EPA officials, emissions from the incinerator have never exceeded the permissible levels. Despite extensive monitoring at the Times Beach site, incidents have occurred. Once, when an unexpected storm interrupted electrical power and caused a shutdown, the emergency system failed to fire. High winds had blown out the pilot lights on this treatment system, which should have fired after the power to the incinerator had been lost. Without the firing, the emergency system did not further treat the kiln gases as it was designed to do. 
Although EPA concluded that the event caused no significant health effects, the agency could only estimate emission levels during the shutdown because the air-monitoring equipment that would have recorded the actual emission levels was on the same circuit as the incinerator and, therefore, was not operating during the event. To prevent future emergency shutdowns from storm-related power losses, the incineration contractor hired local weather forecasting services to improve storm warnings and formally adopted a standard operating procedure to stop the waste feeds during severe weather. (This standard operating procedure had already been in force at the Bayou Bonfouca/Southern Shipbuilding Superfund site when the event occurred.) In addition, other measures were taken to prevent the emergency system’s pilot lights from being blown out and to decrease the number of power outages.

Improper handling of the emission samples taken during a dioxin stack test was alleged following the discovery that the test samples were taken by a company that is a subsidiary of the incineration contractor. EPA maintains that the incinerator operator followed all required procedures for testing the samples. EPA has no regulation that prohibits the incineration contractor or one of its subsidiaries from taking, transporting, or analyzing the test samples. In addition, the time taken to deliver the samples to the laboratory was questioned—8 days from the time the samples left the site until they arrived at the laboratory. According to EPA officials, the samples are stable, making the time taken to get them to the laboratory unimportant. State officials reviewed the testing and determined that the results were valid. However, in December 1996, the EPA Ombudsman issued a report on the allegations and recommended that a new stack test be conducted to ensure public confidence in the cleanup. EPA agreed to implement the Ombudsman’s recommendation.

James F. Donaghy, Assistant Director
Jacqueline M. Garza, Staff Evaluator
Richard P. Johnson, Attorney Adviser
William H. Roach, Jr., Senior Evaluator
Paul J. Schmidt, Senior Evaluator
Magdalena A. Slowik, Intern
Edward E. Young, Jr., Senior Evaluator
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) use of incineration at Superfund sites, focusing on: (1) what safeguards EPA uses to promote the safe operation of incinerators at these sites; and (2) whether EPA has fully implemented its planned system of safeguards. GAO noted that: (1) EPA relies upon four main methods to promote the safe operation of incinerators used at Superfund sites; (2) these methods are: (a) required site-specific standards for an incinerator's emissions and performance; (b) engineering safety features built into the incinerator's systems; (c) air monitoring to measure the incinerator's emissions; and (d) on-site observation of the incinerator's operations; (3) EPA sets standards after it studies each site's characteristics; (4) each incinerator is designed with safety features intended to stop its operation if it fails to meet the specified operating conditions; (5) air monitors are placed in the incinerator's stack and around the site's perimeter to measure the incinerator's emissions; (6) at the three Superfund sites with ongoing incineration projects at the time of GAO's review, EPA had arranged for 24-hour, on-site oversight from either the U.S. Army Corps of Engineers or a state government to ensure that the incinerator was operating properly; (7) in addition to the four methods discussed above, EPA managers intended to use two other techniques, inspections and applications of lessons learned, to encourage safe operations, but neither was fully implemented; (8) EPA has not used inspectors from its hazardous waste incinerator inspection program to evaluate the operations of all Superfund incinerators as it required in a 1991 directive; (9) only one of the three incinerators GAO visited had received such an inspection; (10) EPA regional staff responsible for hazardous waste incinerator inspections were unaware that the Superfund incinerators were supposed to be inspected, and EPA headquarters officials were unaware that the inspections were not occurring; (11) EPA managers did not follow through on their intention to systematically apply the lessons learned from incineration at one site to other sites; (12) they had intended to prepare documents describing problems and solutions at each incineration project for use in designing and operating other projects and to hold periodic conference calls with the managers from incineration sites to discuss issues of common interest; (13) both of these methods of transferring information were dropped for various reasons; (14) GAO found that the lessons learned from the problems experienced at the sites GAO visited could benefit other sites; and (15) EPA headquarters officials told GAO that they encouraged Superfund project managers to share their experiences with incineration but had not facilitated this exchange in a structured way.
ATSA, signed into law on November 19, 2001, shifted certain responsibilities for aviation security from commercial airport operators and air carriers to the federal government and the newly created Transportation Security Administration. Specifically, ATSA granted TSA direct operational responsibility for the screening of passengers and their baggage, as well as responsibility for overseeing U.S. airport operators’ efforts to maintain and improve the security of commercial airport perimeters, access controls, and workers. While airport operators, not TSA, retain direct day-to-day operational responsibility for these areas of security, ATSA’s sections 106, 136, and 138 direct TSA to improve the security of airport perimeters and the access controls leading to secured airport areas, as well as measures to reduce the security risks posed by airport workers, as shown in figure 1.

On February 17, 2002, TSA assumed responsibility from FAA for certain aspects of security at the nation’s commercial airports, including FAA’s existing aviation security programs, plans, regulations, orders, and directives. Soon thereafter, on February 22, 2002, the Department of Transportation issued regulations to reflect the change in jurisdiction from FAA to TSA. Also, TSA reissued security directives originally issued by FAA after September 11, 2001, related to perimeter and access control security. TSA hired 158 federal security directors (FSDs) to oversee the implementation of these requirements at airports nationwide. The FSDs also work with inspection teams from TSA’s Aviation Regulatory Inspection Division to conduct compliance inspections. In addition, as part of its oversight role, TSA headquarters staff conducts covert testing and vulnerability assessments to help individual airport operators determine how to improve security and to gather data to support systemwide analysis of security vulnerabilities and weaknesses.

Airport operators are responsible for implementing TSA security requirements for airport perimeters, access controls, and airport workers. Each airport’s security program, which must be approved by TSA, outlines the security policies, procedures, and systems the airport intends to use in order to comply with TSA security requirements. There are about 450 commercial airports in the United States. Depending upon the type of aircraft operations, airport operators must establish either complete, supporting, or partial security programs. Complete security programs include guidelines for performing background checks on airport workers, providing security training for these workers, and controlling access to secured airport areas, among other things. Federal regulations also require that commercial airports with complete security programs designate areas where specific security practices and measures are in place and provide a diagram of these areas. Figure 2 is a diagram of a typical commercial airport and the security requirements that apply to each airport area. Among the requirements shown for the air operations area (AOA), for example, are signs at access points and along the perimeter warning against unauthorized entry and access controls that meet performance standards, such as proximity cards used with personal identification numbers.

TSA classifies airports into one of five categories (X, I, II, III, and IV) based on various factors, such as the total number of take-offs and landings annually, the extent to which passengers are screened at the airport, and other special security considerations. U.S.
commercial airports are divided into different areas with varying levels of security. Individual airport operators determine the boundaries for each of these areas on a case-by- case basis, depending on the physical layout of the airport. As a result, some of these areas may overlap. Secured areas, security identification display areas (SIDA), and air operations areas (AOA) are not to be accessed by passengers, and typically encompass areas near terminal buildings, baggage loading areas, and other areas that are close to parked aircraft and airport facilities, including air traffic control towers and runways used for landing, taking off, or surface maneuvering. On the other hand, sterile areas are located within the terminal where passengers wait after screening to board departing aircraft. Access to these areas is controlled by TSA screeners at checkpoints where they conduct physical screening of passengers and their carry-on baggage for weapons and explosives. According to TSA estimates, there are about 1,000,000 airport and vendor employees who work at the nation’s commercial airports. About 900,000 of these workers perform duties in the secured or SIDA areas. Airport operators issue SIDA badges to these airport workers. These badges identify the workers and grant them the authority to access the SIDA and secured areas without an escort. Examples of workers with unescorted access to the SIDA and secured areas include workers who access aircraft, including mechanics, catering employees, refuelers, cleaning crews, baggage handlers, and cargo loaders. TSA estimates there are an additional 100,000 employees who work in sterile airport areas, such as the concourse or gate area where passenger flights load and unload. Examples of employees who work or perform duties in the sterile area include those operating concessions and shops, and other air carrier or vendor employees. Other workers may, from time to time, need to enter the SIDA or secured area and must be accompanied by an escort who has been granted unescorted access authority. According to TSA, only a relatively small number of airport workers need regular escorted access to the SIDA and secured areas. Job functions in this category would include delivery personnel, construction workers, and specialized maintenance crews. Methods used by airports to control access through perimeters or into secured areas vary because of differences in the design and layout of individual airports, but all access controls must meet minimum performance standards in accordance with TSA requirements. There are a variety of commercially available technologies that are currently used for these purposes or are used for other industries but could be applied to airports. In addition, TSA has a research and development program to develop new and emerging technologies for these and other security- related purposes. TSA has three efforts under way to evaluate the security of commercial airports’ perimeters and the controls that limit unauthorized access into secured areas. While ATSA only requires that TSA perform compliance inspections, the agency also relies on covert testing of selected security procedures and vulnerability assessments to meet the legislation’s mandate to strengthen perimeter and access control security. TSA acknowledged the importance of conducting these evaluation efforts as an essential step to determine the need for, and prioritization of, additional perimeter security and access control security measures. 
But the agency has not yet established several elements needed for effective short- and long-term management of these evaluations, such as schedules for conducting its efforts and an analytical approach to using the results of its evaluations to make systematic improvements to the nation’s commercial airport system.

ATSA (Sec. 106(c)(2)) requires TSA to assess and test for airport compliance with federal access control security requirements and report annually on its findings. TSA originally planned to conduct comprehensive assessments at each commercial airport periodically. Staff from TSA’s Aviation Regulatory Inspection Division, along with local airport inspection staff working under federal security directors, completed relatively few comprehensive airport inspections in fiscal year 2002, although TSA completed considerably more in 2003. In addition, TSA records indicated that a significant number of individual, or “supplemental,” inspections of specific areas of security or local airport security concerns were conducted in fiscal years 2002 and 2003. TSA, however, did not identify the scope of these inspections, or how many airports were inspected through its supplemental inspections. In addition, the agency did not report on the results of these comprehensive or individual supplemental inspections, as required by ATSA. According to TSA, the agency was limited in its ability to analyze these data because compliance reports submitted during this time frame were compiled in a prototype reporting system that was under development. In July 2003, TSA deployed the automated system—Performance and Results Information System (PARIS)—and began to compile the results of compliance reviews.

In TSA’s Annual Inspection and Assessment Plan for fiscal year 2004, TSA revised its approach for reviewing airport operator compliance with security regulations. According to TSA, the new inspection process uses risk management principles that consider threat factors, local security issues, and input from airport operators and law enforcement to target key vulnerabilities and critical assets. Under the new inspection process, the local federal security director at each airport is responsible for determining the scope and emphasis of the inspections, as well as managing local TSA inspection staff. According to the agency, the continuous inspections approach resulted in completion of a significant number of individual inspections of airport access controls and other security requirements in the first few months of fiscal year 2004.

The percentage of inspections that found airport operators to be in compliance with security requirements, including those related to perimeters and access control, was high. According to TSA, its goal is for airport operators to be in 100 percent compliance with security requirements. Despite the generally high compliance rates, TSA identified some instances of airport noncompliance involving access controls. According to TSA, the agency’s new approach to conducting compliance inspections is designed to be a cooperative process based on the premise that working voluntarily and collaboratively with airport operators to resolve security issues is more effective than using penalties to enforce compliance. This approach is intended to identify the root causes of security problems, develop solutions cooperatively with airport operators, and focus the use of civil enforcement actions on the most serious security risks revealed by TSA’s inspections.
As a result, TSA said that the majority of airport inspection violations related to airport security were addressed through on-site counseling with airport operator officials, rather than administrative actions or civil monetary penalties, which TSA is authorized to issue when airport operators fail to address identified areas of noncompliance. According to TSA, on-site counseling is used only for minor infractions that can be easily and quickly corrected. Administrative actions progress from a warning notice suggesting corrective steps to a letter of correction that requires an airport operator to take immediate action to avoid civil penalties. TSA was able to provide the number of cases in which it recommended the issuance of civil penalties to airport operators for violations of security requirements. Table 1 shows the various types of enforcement actions used by TSA to address airport operator noncompliance with security requirements for the period between October 2003 and February 2004.

TSA had not assessed the effectiveness of these penalties in ensuring airport compliance with security requirements as required by ATSA (Sec. 106(c)(2)). TSA said the agency was not able to conduct inspections at all commercial airports in prior years, or assess the effectiveness of the use of penalties to ensure airport compliance, because of limited personnel assigned to perform these tasks and agency decisions to direct these resources to address other areas of aviation security, such as passenger and baggage screening operations. According to TSA, the primary focus of field inspectors was to monitor passenger and baggage screening operations immediately following the attacks of September 11. As a result, routine inspections were not assigned as high a priority during the months following the attacks. For example, while DHS authorized TSA to use 639 full-time employees for the purpose of performing airport security inspections in fiscal year 2003, TSA allocated 358 full-time employees for this purpose. TSA said that the agency is hiring new regulatory inspectors at airports to help conduct required inspections. In its fiscal year 2005 budget submission, TSA requested over 1,200 full-time employees to conduct compliance inspections.

TSA said airport compliance inspections are needed to ensure that airport operators take steps to address deficiencies as they are identified. TSA also said that the agency has proposed measuring the performance of individual airports against national performance averages, and airports that fall below accepted levels of compliance would receive additional inspections or other actions. However, TSA has not yet developed a plan outlining how the results of its compliance inspections will be used to interpret and help analyze the results of airport vulnerability assessments and covert testing. For example, at the time of our review, a majority of airports tested had high compliance rates, indicating that these airports are implementing most security regulations. However, assessing airport operator compliance with security requirements as a stand-alone measure does not provide a complete picture of the level of security at these airports. Covert testing and vulnerability assessments provide additional information that, taken together with the results of compliance inspections, provides a more complete picture of the security environment at commercial airports on a systemwide basis.
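TSA's proposal to measure individual airports against national performance averages amounts to a simple screening rule layered on the compliance data. The sketch below illustrates one way such a screen could work; the airport names, compliance rates, and 5 percentage point tolerance are invented for illustration, since TSA had not defined the thresholds it would use.

    # Hypothetical compliance rates (share of inspections with no findings).
    rates = {"Airport A": 0.99, "Airport B": 0.96, "Airport C": 0.88}
    TOLERANCE = 0.05  # flag airports more than 5 points below the average

    national_average = sum(rates.values()) / len(rates)
    flagged = [airport for airport, rate in rates.items()
               if rate < national_average - TOLERANCE]

    # Flagged airports would be candidates for additional inspections
    # or other follow-up actions.
    print(f"national average: {national_average:.2f}")
    print("flagged for follow-up:", flagged)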
From September to December 2003, TSA conducted vulnerability assessments at some of the nation's commercial airports to help individual airport operators determine how to improve security. At the time of our review, TSA had not established a schedule for completing assessments at the remaining airports. TSA is conducting these vulnerability assessments as part of a broader effort to implement a risk management approach to better prepare for and withstand terrorist threats. A risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions. (See app. II for a description of risk management principles and TSA's tools for implementing these principles.) TSA uses various threat scenarios that describe potentially dangerous situations as a basis for conducting its vulnerability assessments. During the assessments, TSA and airport operators review the scenarios and rank them according to the risk each poses to the individual airport. As part of each vulnerability assessment, TSA provided airport operators with a report on the results and recommended short- and long-term countermeasures to reduce the threats identified. According to TSA, some of these countermeasures may be difficult for (1) airport operators to implement because of limited availability of security funding and (2) TSA to mandate because issuing new security regulations is often a time-consuming process that involves public comment and analysis of potential impacts. However, TSA does have authority under 49 U.S.C. § 114(l)(2) to issue regulations or security directives immediately in order to protect transportation security. Various sources have highlighted the importance of TSA's continuing efforts to assess airport vulnerabilities. For example, in December 2003, the President issued a directive calling for assessments of the vulnerability of critical infrastructure, including airports, to assist in developing the nation's homeland security strategy. In addition, TSA data on reported security breaches of airport access controls revealed that such known breaches have increased in recent years. Further, airport operator officials we spoke with noted the importance of vulnerability assessments as the key step in determining needed security enhancements at each airport. Specifically, airport security coordinators at 12 of the nation's 21 largest and busiest airports said that a TSA vulnerability assessment would facilitate their efforts to comprehensively identify and effectively address perimeter and access control security weaknesses. At the time of our review, TSA had allocated 9 staff to conduct the vulnerability assessments and another 5 staff to analyze the results. According to TSA, these staff also perform other assessment and analytical tasks. Although TSA initially said that it expected to conduct additional assessments in 2004, the agency suspended its efforts to use established threat scenarios to assess vulnerabilities in January 2004. TSA said that the agency elected to redirect staff resources to conduct higher priority assessments of the threat posed by shoulder-fired missiles, also referred to as man-portable air defense systems (MANPADS). In addition, TSA said that the agency planned to begin conducting joint vulnerability assessments with the FBI.
The FBI previously conducted joint assessments with FAA in response to requirements established in the Federal Aviation Administration Reauthorization Act of 1996. At the time of our review, TSA said that the agency had not yet determined how to allocate its resources to conduct vulnerability assessments using established threat scenarios versus initiating joint assessment efforts with the FBI. When TSA resumes its scenario-based assessment efforts, the agency plans to prioritize its efforts by focusing on the most critical airports. (TSA said the agency intends to determine the criticality of commercial airports based on factors such as current threat intelligence, the number of fatalities that could occur during an attack on the airport, and the economic and sociopolitical importance of the facility.) After TSA resumes its assessment efforts, the agency intends to compile baseline data on security vulnerabilities to enable it to conduct a systematic analysis of airport security vulnerabilities on a nationwide basis. TSA said such an analysis is essential since it will allow the agency to determine minimum standards and the adequacy of security policies and help the agency and airports better direct limited resources. Nonetheless, at the time of our review, TSA had not yet developed a plan that prioritizes its assessment efforts, provides a schedule for completing these assessments, or describes how assessment results will be used to help guide agency decisions on what, if any, security improvements are needed. Through funding of a limited number of security enhancements, TSA has helped to improve perimeter and access control security at some airports. However, at the time of our review, TSA had not yet developed a plan to prioritize expenditures to ensure that funds provided have the greatest impact in improving the security of the commercial airport system. Concerning evaluations of security technologies, ATSA contained three provisions (Secs. 136, 106(b), and 106(d)) directing TSA to assess security technologies related to perimeter and access control security and develop a plan to provide technical (and funding) assistance to small- and medium-sized airport operators. TSA has not fully addressed these provisions or developed plans for how and when these requirements will be met. Some airport operators are currently testing or implementing security technologies independently, while others are waiting for TSA to complete its own technology assessments and issue guidance. In fiscal years 2002 and 2003, TSA worked with FAA to review and approve security-related Airport Improvement Program (AIP) grant applications for perimeter security and access control projects and other security-related projects. As we reported in October 2002, perimeter and access control security measures—fencing, surveillance and fingerprinting equipment, and access control systems—accounted for almost half of fiscal year 2002 AIP funding for security projects, as shown in table 2. In fiscal year 2003, FAA provided a total of $491 million for security-related AIP projects, including about $45.6 million for perimeter fencing projects and another $56.9 million for access control security, a total of about 21 percent of security funding. In addition, Congress appropriated a $175 million supplement to the program in January 2002 to reimburse 317 airports for post-September 11 security mandates.
TSA said that FAA’s AIP served as its plan to provide the financial assistance to small and medium-sized airports required by Section 106(b) of ATSA. According to TSA, local federal security directors worked with FAA officials to review and approve security-related AIP grant applications submitted by individual airports, evaluating their merits on an airport-by-airport basis based on guidelines developed and provided by TSA. TSA has not, however, developed an approach to prioritize funding for perimeter and access control security projects at small- and medium- sized (or larger) airports. Without a plan to consider airports’ security needs systematically, including those of small- and medium-sized airports, TSA could not ensure that the most critical security needs of the commercial airport system were identified and addressed in a priority order. More importantly, because TSA has assumed primary responsibility for funding security-related projects, FAA’s AIP cannot continue to serve as TSA’s plan for providing financial assistance to small- and medium-sized airports. Without a plan, TSA could be less able to document, measure, and improve the effectiveness of the agency’s efforts to provide funding support for enhancing perimeter and access control security. While acknowledging the lack of a specific plan, TSA said the agency had, in conjunction with FAA, deployed and installed explosive detection systems, explosive trace detection and metal detection devices, and other security equipment at many small- and medium-sized airports for use by federal screeners at those airports and that over 300 small- and medium- sized airports had received technical support and equipment of some kind. However, in advising FAA throughout this process, TSA did not compile and analyze historical information on the cost and types of technology used or the specific airports receiving AIP assistance for perimeter and access control-related security enhancement projects (although TSA stated that historical data were available that could be used to conduct such analyses). FAA has historically maintained data on the uses of AIP funding (including the types of projects funded, amounts, and locations) in a commonly used commercial database system (Access). In addition, airport associations, such as the American Association of Airport Executives, also collect and disseminate information on the use of AIP funds for security enhancements. Without analyses of such historical information, TSA’s ability to establish a baseline of security funding for current and future planning efforts to enhance perimeter and access controls could be limited. In addition to consulting with FAA to provide funding for airport security projects through the AIP, TSA recently began providing security funding directly to airport operators. Specifically, in December 2003, TSA awarded approximately $8 million in grants to 8 airports as part of $17 million appropriated by Congress for enhancing the security of airport terminals, including access controls and perimeter security. Table 3 provides a brief description of the perimeter and access control security-related projects at the 8 airports TSA selected for funding. The Vision 100—Century of Aviation Reauthorization Act shifted most of the responsibility for airport security project funding from FAA and the AIP to TSA by establishing a new Federal Aviation Security Capital Fund in December 2003. 
Through the new fund, Congress authorized up to $500 million for airport security for each fiscal year from 2004 through 2007. Of the total, $250 million will be derived from passenger security fees, along with an additional authorization of up to $250 million. Of this amount, half of the money from each funding source is to be allocated pursuant to a formula that considers airport size and security risk. The other half would be distributed at the Under Secretary's discretion, with priority given to fulfilling intentions to obligate under letters of intent that TSA has issued. TSA said it was working on, but had not yet developed, policies and procedures for (1) defining how the agency will fund and prioritize airport security projects under the new program and (2) determining how much, if any, of the new funding will be used for perimeter security and access control projects. However, TSA said that the administration requested in its 2005 budget justification that Congress eliminate the allocation formula so that the agency could allocate funds according to a threat-based risk assessment approach, regardless of the size of the airport. TSA has begun efforts to test commercially available and emerging security technologies to enhance perimeter and access control security. However, TSA has not yet fully addressed three ATSA requirements related to testing, assessing, recommending, and deploying airport security technologies and has not taken steps to otherwise compile and communicate the results of airport operators' independent efforts to test and deploy security technologies. Two ATSA provisions required that TSA assess technologies for enhancing perimeter and access control security. The first provision (Sec. 136) required that TSA (1) recommend commercially available security measures or procedures for preventing access to secured airport areas by unauthorized persons within 6 months of the act's passage and (2) develop a 12-month deployment strategy for commercially available security technology at the largest and busiest airports (category X). TSA has not explicitly addressed the requirements in this provision and did not meet the associated legislative deadlines. For example, TSA did not recommend commercially available technologies to improve surveillance and the use of controls at access points by May 2002, nor has it developed a deployment strategy. TSA said the agency failed to meet these deadlines because resources and management attention were primarily focused on meeting the many deadlines and requirements associated with passenger and baggage screening, tasks for which TSA has direct operational responsibility. The second technology provision of ATSA (Sec. 106(d)) requires that TSA establish a pilot program to test, assess, and provide information on new and emerging technologies for improving perimeter and access control security at 20 airports. TSA's $20 million Airport Access Control Pilot Program is intended to assist the agency in developing minimum performance standards for airport security systems, assessing the suitability of emerging security technologies, and sharing resulting information with airport operators and other aviation industry stakeholders. In October 2003, TSA selected a systems integrator to oversee the program and coordinate testing; however, the agency has not selected the specific technologies to be evaluated.
TSA plans to look at four areas: biometric identification systems, new identification badges, controls to prevent unauthorized persons from piggybacking (following authorized airport workers into secured areas), and intrusion detection systems. TSA said the agency will conduct the technology assessments in two phases and that the second phase is scheduled to be completed by the end of 2005. However, TSA has not developed a plan describing the steps it will take once the program is completed, although TSA said the agency intends to communicate the results of both assessment phases to airport operators. TSA also said the agency will determine how to use results of the technology assessments and if it will issue any new security or performance standards to airports nationwide when both program assessment phases are completed. Without a plan that considers the potential steps the agency may need to take to effectively use the results of the pilot tests—for example, by issuing new standards—TSA’s ability to take effective and immediate steps once the program is completed could be limited. In addition to the pilot program, testing of a national credentialing system for workers in all modes of transportation—the Transportation Workers Identification Credential (TWIC) Program—is another effort that may help TSA address the requirement in Section 136 of ATSA related to testing and recommending commercially available security technologies to enhance perimeter and access control security. According to TSA, the program is intended to establish a uniform identification credential for 6 million workers who require unescorted physical or cyber access to secured areas of transportation facilities. The card is intended to combine standard background checks and new and emerging biometric technology so that a worker can be positively matched to his or her credential. According to TSA, the agency spent $15 million for the program in fiscal year 2003. In April 2003, TSA awarded a contract for $3.8 million to an independent contractor to assist TSA in the technology evaluation phase of the TWIC program and to test and evaluate different types of technologies at multiple facilities across different modes of transportation at pilot sites. Congress directed $50 million for the TWIC program for fiscal year 2004. This program is scheduled for completion in 2008. We have a separate review under way looking at TSA’s TWIC pilot testing at maritime ports and expect to report to the Senate Commerce Committee later this year. Airport operators and aviation industry associations identified a number of operational issues that they said need to be resolved for the TWIC card to be feasible. For example, they said the TWIC card would have to be compatible with the many types of card readers used at airports around the country, or new card readers would have to be installed. At large airports, this could entail replacing hundreds of card readers, and airport representatives have expressed concerns about how this effort would be funded. According to TSA, however, the TWIC card is intended to be compatible with all airports’ card readers. Nonetheless, TSA has not yet conducted an analysis of the cost and operational impacts of implementing the program at airports nationwide. TSA said it intends to gather additional information needed to conduct such an analysis at some point in the future. The third provision of ATSA related to technology (Sec. 
106(b)) requires that TSA develop a plan to provide technical (and funding) support to small- and medium-sized airports. TSA had not developed such a plan. As discussed earlier, TSA said that FAA's AIP was the agency's effort to meet this provision. However, this was an FAA plan and did not fully meet the requirement. More importantly, because the amount of money coming from the AIP for security-related projects, and thereby TSA's continuing involvement with FAA in administering the program, will be significantly reduced, the AIP cannot continue to serve as TSA's plan for providing technical assistance to small- and medium-sized airports. Without a plan, TSA could be less able to document, measure, and improve the effectiveness of the agency's efforts to provide technical support for enhancing perimeter and access control security. We contacted airport operator officials responsible for security at the nation's 21 largest and busiest U.S. commercial airports to obtain their views on the need for technical guidance from TSA to enhance the security of perimeters and access controls. Some airport operators said they were waiting for TSA to complete its technology assessments before enhancing perimeter and access control security, while other airport operators were independently testing and deploying security technologies. Officials at the airports awaiting TSA's assessments said they would not proceed with security upgrades until the agency provided guidance. These airport operators also said that security technology is very costly, and they cannot afford to pay for testing technology prior to purchasing and installing such technology at their airports. They said that information or guidance from TSA about what technologies are available or most effective to safeguard airport perimeters would be beneficial. Conversely, officials at other airports said they were assessing what is needed to improve their perimeter security and access controls by independently testing and installing security technologies. Several of these officials said that the trial-and-error approach to improving security would not be necessary if TSA acted as a clearinghouse for information on the most effective security technologies and how they can be applied. They said that their independent efforts did not always ensure that increasingly limited resources for enhancing security were used in the most effective way. In addition to contacting the 21 largest and busiest airports, we identified 13 other airports as examples of airports that have tested or implemented technologies for improving airport perimeter and access control security. Figure 3 shows where various perimeter and access control security technologies were being tested at the time of our review or had been implemented at selected commercial airports across the nation. While some independent efforts have been successful in identifying effective security technologies, others have been less successful. For example, one airport operator said it contracted with a private technology vendor to install identity authentication technology to screen documents presented by job applicants. The airport completed a 5-month pilot program in the fall of 2002 and subsequently purchased two workstations to implement the technology at the airport at a cost of $130,000. Another airport operator conducted an independent pilot program in 2002 to test a biometric recognition system in order to identify airport workers.
The system compared 15 airport workers against a database of 250 airport workers but operated at a high failure rate. Although compiling information on this pilot test and other airports' efforts would augment TSA's own efforts to assess technology, TSA has not considered the costs and benefits of compiling and assessing the information being collected through these independent efforts. TSA agreed that compiling such data could be beneficial, but the agency had not yet focused its attention on gathering data to generate useful information on such independent testing efforts. Without taking steps to collect and disseminate the results of these independent airport operator efforts to test and deploy security technologies, TSA could miss opportunities to enhance its own testing activities, as well as help other airport operators avoid potentially costly and less effective independent test programs. TSA has taken steps to reduce the potential security risks posed by airport workers, but it has not addressed all of the requirements in ATSA related to background checks, screening, security training, and vendor security programs or developed plans that describe the actions it intends to take to fully address these requirements. For example, TSA required criminal history records checks and security awareness training for most, but not all, of the airport workers called for in ATSA (Secs. 138(a)(8) and 106(e), respectively). Finally, TSA does not require airport vendors with direct access to the airfield and aircraft to develop security programs, which would include security measures for vendor employees and property, as required by ATSA (Sec. 106(a)). TSA cited resource, regulatory, and operational concerns associated with performing checks on additional workers and providing additional training, as well as the potentially significant costs to vendors to establish and enforce independent security programs. However, TSA had not yet completed analyses to quantify these costs, determine the extent to which the industry would oppose regulatory changes, or determine whether it would be operationally feasible for TSA to monitor implementation of such programs. TSA requires most airport workers who perform duties in secured and sterile areas to undergo a fingerprint-based criminal history records check, and it requires airport operators to compare applicants' names against TSA's aviation security watch lists. Once workers undergo this review, they are granted access to airport areas in which they perform duties. For example, those workers who have been granted unescorted access to secured areas are authorized access to these areas without undergoing physical screening for prohibited items (which passengers undergo prior to boarding a flight). To meet TSA requirements, airport operators transmit applicants' fingerprints to a TSA contractor, which in turn forwards the fingerprints to TSA, which submits them to the FBI to be checked for criminal histories that could disqualify an applicant for airport employment. TSA also requires that airport operators verify that applicants' names do not appear on TSA's "no fly" and "selectee" watch lists to determine whether applicants are eligible for employment.
According to TSA, all airport workers who have unescorted access to secured airport areas—approximately 900,000 individuals nationwide—underwent a fingerprint-based criminal history records check and verification that they did not appear on TSA's watch lists by December 6, 2002, as required by regulation. In late 2002, TSA required airport operators to conduct fingerprint-based checks and watch list verifications for an additional approximately 100,000 airport workers who perform duties in sterile areas. As of April 2004, TSA said that airport operators had completed all of these checks. To verify that required criminal checks were conducted, we randomly sampled airport employee files at 9 airports we visited during our review and examined all airport employee files at a 10th airport. Based on our samples, we estimate that criminal history record checks at 7 of the airports were conducted for 100 percent of the airport employees. At the other 2 airports where we sampled files, we estimate that criminal history checks were conducted for 98 percent and 96 percent of the airport workers. At the 10th airport, where we examined all airport employee files, we found that criminal history checks were conducted for 93 percent of the airport employees. Although airport operators could not, in a small number of cases, provide documentation that the checks were conducted, airport security officials said that no individuals were granted access to secured or sterile areas without the completion of such a check. TSA said that verification of airport compliance with background check requirements was a standard part of airport compliance inspections. For example, according to TSA, the agency conducted criminal history records check verification inspections at 103 airports between October 1, 2003, and February 9, 2004, and found that the airports were in compliance about 99 percent of the time. TSA does not require airport workers who need access to secured areas from time to time (such as construction workers), and who must be regularly escorted, to undergo a fingerprint check or scan against law enforcement databases, even though such checks are also required by ATSA (Sec. 138(a)(6)). Although TSA does not require that airport operators conduct these checks, TSA drafted a proposed rule in 2002 to require checks on individuals escorted in secured areas. The draft rule also set forth minimum standards for providing escorts for these individuals. In a February 2003 report on TSA's efforts to enhance airport security, the Department of Transportation Inspector General recommended that TSA revise its proposed rule to enhance the security benefits that the new rule could provide by including (1) additional background check requirements, (2) a more specific description of escort procedures, and (3) a clarification on who would be exempt from such requirements. However, at the time of our review, TSA had not addressed these recommendations, issued the proposed rule, or developed a schedule for conducting and completing the rule making process. According to TSA, the agency plans to proceed with its rule making to address background checks for those who have regularly escorted access, and, in consultation with DHS and the Office of Management and Budget, has included this rule making as part of a priority list of 20 rule makings that the agency plans to initiate in the next 12 months.
While TSA has taken steps to conduct fingerprint-based checks for airport employees who work in secured and sterile areas, certain factors limit the effectiveness of these checks. For example, fingerprint-based checks identify only individuals whose fingerprints and criminal records are on file in the FBI's national fingerprint database. Limitations of these checks were highlighted by recent investigations involving multiple federal agencies, which found that thousands of airport workers falsified immigration, Social Security, or criminal history information to gain unescorted access to secured and sterile airport areas. In some of these cases, airport workers who had provided false information to obtain unescorted access underwent a fingerprint-based check and passed. TSA noted that the federal government had not yet developed a system that would allow interagency database searches to provide access to Social Security and immigration information. Another limitation with TSA's process for conducting background checks on airport workers is that fingerprint checks do not include a review of, among other things, all available local (county and municipal) criminal record files. As a result, an individual could pass the fingerprint check although he or she had a local criminal record. TSA officials did not consider the lack of a local criminal records check to be a limiting factor because local criminal records are not likely to include any of the 28 criminal convictions that would disqualify an individual from obtaining unescorted access to secured airport areas. According to TSA, local criminal files do not include the more serious crimes such as murder, treason, arson, kidnapping, and espionage that are listed in state and federal criminal databases. Further, several airport operator officials we spoke with expressed concern about cases in which individuals had committed disqualifying criminal offenses and were ultimately granted access to secured areas because federal law (and TSA's implementing regulation) disqualifies an individual only if he or she has been convicted of an offense within 10 years of applying for employment at the airport. Others said that a few disqualifying criminal offenses, such as air piracy, warranted a lifetime rather than a 10-year ban on employment in secured airport areas. Also, current regulation requires that airport workers report if they are convicted of a crime after the initial criminal check is conducted and that they surrender their security identification badges within 24 hours of their conviction. In addressing the issue of background checks in May 2003, the Department of Transportation's Inspector General issued a statement supporting random recurrent background checks. TSA recognizes the potential limitations of current fingerprint check requirements and has taken steps to improve the process. For example, in 2002, TSA began conducting an additional two-part background check consisting of a name-based FBI National Crime Information Center (NCIC) check and a terrorist link analysis against selected terrorism databases for the approximately 100,000 airport workers who perform duties in sterile areas. TSA said it expanded the background check process for these workers because it believed that the cost was more feasible for airport operators to bear, given that these workers represent a significantly smaller population than workers who have unescorted access to secured areas.
TSA used the NCIC database, a computerized index of documented criminal justice information, to conduct a criminal history record check that compares an individual’s name against 19 nationwide criminal history lists. The terrorist link analysis determines whether an airport worker is known to pose a potential terrorist threat. TSA officials noted that the terrorist link analysis could identify personal information on airport employment applications, among other things, thus improving the current background check process. TSA faces challenges in expanding the scope and frequency of current background check requirements to include additional airport workers and more extensive background checks. In terms of expanding background checks to include airport workers who have regularly escorted access to secured areas, TSA said that determining how many workers are regularly escorted in secured airport areas is a challenge because these individuals (such as construction workers) enter the airport on an infrequent and unpredictable basis. TSA said airport officials could not easily determine how many workers are regularly escorted in secured areas and which workers would warrant a background check. TSA had not conducted any sampling or other analysis efforts to attempt to determine how many workers this might include. In terms of expanding the scope of current background check requirements to include more extensive checks on airport workers who have unescorted access to secured areas, TSA cited the time needed to establish regulatory requirements for the more extensive checks and the potential costs of conducting the checks as challenges. In contrast, to reduce the security risk associated with federal airport screeners, TSA conducts far more extensive checks before providing screeners the same level or lower levels of airport access. The agency supports conducting the expanded checks for all commercial aviation workers and estimated that the cost to perform fingerprint-based criminal history records checks for all secured and sterile area workers nationwide has been approximately $60 million to $80 million (or about $60 to $80 for each of the approximately 1 million secured and sterile area workers). TSA had not estimated the costs of applying additional checks to all airport workers. In addition, TSA stated that increasing the frequency of background checks would also increase costs to airport operators. However, TSA had not developed a specific cost analysis to assess the costs of expanding the scope and frequency of the checks or whether the additional security provided by taking such steps would warrant the additional costs. TSA said the agency is considering alternatives for how these additional checks would be funded. TSA also said that requiring airport workers themselves to pay for a portion of the background check, which is a common practice at some airports, could help to fund these additional checks. In recognition of the potential security risk posed by airport workers, TSA said the agency was weighing the costs and security benefits of expanding the scope and frequency of current background check requirements to include additional airport workers, as well as more extensive checks. However, TSA has not yet established a plan outlining how and when it will do so. For example, TSA has not yet proposed specific analyses to support its decision making or a schedule describing when it plans to decide this issue. TSA has different requirements for screening airport workers. 
For sterile area workers, TSA requires, among other things, that they be screened at the checkpoint. According to TSA's Office of Chief Counsel, TSA intended that sterile area workers be required to enter sterile areas through the passenger-screening checkpoint and be physically screened. However, airport officials, with the FSD's approval, may allow sterile area workers to enter sterile areas through employee access points or may grant them unescorted access authority and SIDA badges. TSA does not require airport workers who have been granted unescorted SIDA access to be physically screened for prohibited items when entering secured areas. According to TSA, the agency relies on its fingerprint-based criminal history records check as a means of meeting the ATSA requirement that all individuals entering secured areas at airports be screened and that the screening of airport workers provides at least the same level of protection that results from physical screening of passengers and their baggage. However, as previously noted, there are limitations with the scope and effectiveness of the background check process. TSA acknowledged that physically screening airport workers for access to secured areas could increase security, but it cited challenges such as the need (and associated costs) for more screening staff and increased passenger delays. Although TSA said fingerprint checks are a more economically feasible alternative, the agency had not conducted analyses to determine the actual costs, assess the potential operational delays that could occur, or measure the reduction in the risk posed by airport workers that physical screening would provide. However, in October 2002, TSA conducted an analysis of threats posed by airport workers with access to secured areas, and one recommendation in the resulting report was to require airport operators to conduct random physical screening of workers entering secured areas. TSA elected not to adopt this recommendation because of what it characterized as the cost and operational difficulties in physically screening workers. TSA did not, however, gather or analyze data from airports to substantiate this claim. Some airport operator officials we contacted agreed with TSA that physically screening workers prior to their entering secured areas would be costly and difficult. For example, some airport operator officials said physical screening of these airport workers would result in increased staffing costs and longer wait times for passengers at passenger-screening checkpoints, or could require screening airport workers at a location separate from passengers to avoid passenger delays. In addition to the operational difficulty of physically screening each worker, TSA and airport operators noted that some airport workers must use prohibited items (such as box cutters and knives) to perform their job functions, and monitoring which workers are allowed to carry such items could be difficult. Also, these prohibited items would still be available to workers who wished to use them to cause harm even after they had been physically screened. At one airport we visited, airport workers who have access to secured areas are required to undergo physical screening when they arrive at work through centralized employee-screening checkpoints but are not screened when they subsequently enter secured areas through other access points.
TSA has not estimated the cost associated with requiring physical screening of secured area airport workers, although airport operators and industry associations believe the cost would be significant. While TSA is weighing the security benefits of requiring physical screening of workers who have access to secured airport areas against the associated costs, the agency has yet to determine whether such requirements will be established. According to TSA, screening in the form of enhanced background checks on all airport workers—checks that would investigate Social Security information, immigration status, and links to terrorism—would, if instituted, further ensure that airport workers were trustworthy and reduce risk, if not the need to physically screen workers. However, TSA has not developed a plan defining when and how the agency will determine whether it will institute these expanded checks or if physically screening airport workers who need access to secured areas is ultimately necessary and feasible. ATSA (Sec. 106(e)) mandates that TSA require airport operators and air carriers to develop security awareness training programs for airport workers such as ground crews and gate, ticket, and curbside agents of air carriers. However, while TSA requires such training for these airport workers if they have unescorted access to secured areas, the agency does not require training for airport workers who perform duties in sterile airport areas. According to TSA, training requirements for these airport workers have not been established because additional training would result in increased costs for airport operators. Nonetheless, officials at some airports we visited said that the added cost is warranted, and they have independently required security training for their airport employees who work in sterile areas to increase awareness of their security responsibilities. Among other things, security training teaches airport workers their responsibility to challenge suspicious persons who are not authorized to be in secured areas (an area covered by TSA's airport covert testing programs). Some airport operator officials said they also used challenge reward programs, whereby airport workers are given rewards for challenging suspicious persons or individuals who are not authorized to be in secured areas, as a way of reinforcing security awareness training. Many airport operator officials we spoke with were concerned that security training for airport workers in secured areas is not required by TSA regulations on a recurrent basis, an issue previously raised by the Department of Transportation's Inspector General. TSA also agreed that recurrent training could be beneficial in raising the security awareness of airport workers. Although recurrent training is not required by ATSA or by TSA regulation, a federal law does require recurrent security training for the purpose of improving secured area access controls. Some airport operators independently provide recurrent training for individuals who demonstrate a lack of security awareness. TSA has acknowledged the value of recurrent training for its own workforce. We previously reported that training for TSA employees—airport screeners—should be recurrent, and TSA said it is developing a recurrent training program for its screening workforce to aid in maintaining security awareness, among other things.
At the time of our review, TSA said it was weighing the benefits of expanding the scope and frequency of security training against the associated costs in time and money to airport operators and businesses. However, TSA had not developed a plan or schedule for conducting the analyses needed to support its decision making or projected when a decision might be made. TSA has not issued a regulation requiring airport vendors (companies doing business in or with the airport) with direct access to the airfield and aircraft to develop a security program, as required by ATSA (Sec. 106(a)). TSA had not developed an estimate of the number of airport vendors nationwide, although TSA officials said the number could be in the thousands. As an example, security officials at an airport we visited said that over 550 airport vendors conducted business in or with the airport. According to TSA, existing airport security requirements address the potential security risks posed by vendors and their employees. For example, vendor employees who perform duties in secured or sterile areas are required to undergo a fingerprint-based criminal history records check, just as other airport workers are, and they are prevented by access controls from entering secured airport areas if they are not authorized to do so. However, as discussed above, fingerprint-based criminal history records checks may have limitations. Many airport operator and airport association officials we spoke with said that requiring vendors to develop their own security program would be redundant because the airport's security program already encompasses all aspects that a vendor program would include, such as requirements for employee security training, procedures for challenging suspicious persons, background checks, monitoring and controlling employee identification badges, and securing equipment and vehicles. In addition, some said such a requirement would also place a financial and administrative burden on vendors doing business at the airport, particularly the smaller ones, to develop and update such programs. Two airport vendors we spoke with said that developing security programs could be costly and time-consuming and could require the use of a consultant with the necessary security expertise to develop such a plan. In addition, vendors said that airport operators are in the best position and have the necessary expertise to determine security policies for all workers, including vendors, working at the airport. According to TSA, requiring vendors to develop and maintain their own security programs would also present a resource challenge to TSA's inspection staff. In addition to conducting reviews of airport operator and air carrier compliance with federal security regulations, the already understaffed inspection workforce would also have to determine a way to review vendor security programs and take enforcement action on any violations. According to TSA, the process of reviewing the programs and verifying implementation of their provisions could require visits to thousands of different vendor locations spread throughout the United States. Despite these challenges, TSA said the agency is considering the costs, benefits, and feasibility of issuing a regulation that would require airport vendors to develop security programs in order to meet the requirements in ATSA. TSA said that it has formed a working group to consider the best approach to take, and this group could become the core of any future rule-making team if necessary.
However, the agency has not developed a plan detailing when this analysis will be complete or when any decisions about whether to issue a new rule will be made. During its first 2 years, TSA assumed a wide variety of responsibilities to ensure that airport perimeter and access controls are secure and that the security risks posed by airport workers are reduced. Given the range of TSA's responsibilities and its relative newness, it is understandable that airport security evaluations remain incomplete and that some provisions of ATSA—which pose operational and funding challenges—have not been met. TSA has begun efforts to evaluate the security environments at airports, fund security projects and test technologies, and reduce the risks posed by airport workers. However, these efforts have in some cases been fragmented rather than cohesive. As a result, TSA has not yet determined how it will address the resource, regulatory, and operational challenges the agency faces in (1) identifying security weaknesses of the commercial airport system as a whole, (2) prioritizing funding to address the most critical needs, or (3) taking additional steps to reduce the risks posed by airport workers. Without a plan that addresses the steps it will take to fulfill the wide variety of security oversight responsibilities the agency has assumed in the area of perimeter and access control security, TSA will be less able to justify its resource needs and clearly identify its progress in addressing requirements in ATSA and associated improvements in this area of airport security. Such a plan would also provide a better framework for Congress and others interested in holding TSA accountable for the effectiveness of its efforts. To help ensure that TSA is able to articulate and justify future decisions on how best to proceed with security evaluations, fund and implement security improvements—including new security technologies—and implement additional measures to reduce the potential security risks posed by airport workers, we recommend that the Secretary of Homeland Security direct TSA's Administrator to develop and provide Congress with a plan for meeting the requirements of ATSA. In addition, at a minimum, we recommend that the following four actions be taken: (1) establish schedules and an analytical approach for completing compliance inspections and vulnerability assessments for evaluating airport security; (2) conduct assessments of technology, compile the results of these assessments as well as assessments conducted independently by airport operators, and communicate the integrated results of these assessments to airport operators; (3) use the information resulting from the security evaluation and technology assessment efforts cited above as a basis for providing guidance and prioritizing funding to airports for enhancing the security of the commercial airport system as a whole; and (4) determine, in conjunction with aviation industry stakeholders, if and when additional security requirements are needed to reduce the risks posed by airport workers and develop related guidance, as needed. We provided a draft copy of this report to the Department of Homeland Security and the Transportation Security Administration for their review and comment. TSA generally concurred with the findings and recommendations in the report and provided formal written comments that are presented in appendix III. These comments noted that TSA has started to, or plans to, implement many of the actions we recommended.
TSA also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to appropriate congressional committees; the Secretary, DHS; the Secretary, DOT; the Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or at berrickc@gao.gov or Chris Keisling, Assistant Director, at (404) 679-1917 or at keislingc@gao.gov. Key contributors to this report are listed in appendix IV. To assess the Transportation Security Administration's (TSA) efforts to (1) evaluate the security of airport perimeters and the controls that limit access into secured airport areas, (2) help airports implement and enhance perimeter security and access controls by providing funding and technical guidance, and (3) implement measures to reduce the potential security risk posed by airport workers, we reviewed pertinent legislation (the Aviation and Transportation Security Act, or ATSA), regulatory requirements, and policy guidance. We discussed specific ATSA requirements related to Sections 106, 136, and 138, which address perimeter and access control security, as well as strengthening requirements for airport workers, with our Office of General Counsel to determine to what extent TSA had met these requirements. We limited our review of TSA's efforts to test, assess, and deploy security technologies to those related to the provisions in Sections 106 and 136 of ATSA. We also obtained and analyzed TSA data on security breaches, inspections of airport compliance with security regulations, and vulnerability assessments. (TSA's covert testing data and information on the test program are classified and are the subject of a separate GAO report.) We discussed the threat scenarios used in TSA vulnerability assessments with TSA officials to identify those related to perimeter and access control security. We also obtained and analyzed data from the Federal Aviation Administration (FAA) and TSA on perimeter and access control-related security funds distributed to commercial airports nationwide. We also reviewed reports on aviation security issued previously by us and the Department of Transportation Inspector General. We discussed the reliability of TSA's airport security breach data for fiscal years 2001, 2002, and 2003 (through October); vulnerability assessment data for 2003; and compliance inspection data for fiscal years 2002, 2003, and 2004 (to February) with TSA officials in charge of these efforts. Specifically, we discussed methods for inputting, compiling, and maintaining the data. In addition, we reviewed reports related to TSA's compliance reviews and vulnerability assessments to determine the results and identify any inconsistencies in the data. We found no inconsistencies and determined that the data provided by TSA were sufficiently reliable for the purposes of our review. In addition, we conducted site visits at 12 commercial airports (8 category X, 1 category I, 1 category II, 1 category III, and 1 category IV) to observe airport security procedures and discuss issues related to perimeter and access control security with airport officials.
Airports we visited were Boston Logan International Airport, Atlanta Hartsfield Jackson International Airport, Ronald Reagan Washington National Airport, Washington Dulles International Airport, Orlando International Airport, Tampa International Airport, Miami International Airport, Los Angeles International Airport, San Francisco International Airport, Middle Georgia Regional Airport, Chattanooga Metropolitan Airport, and Columbus Metropolitan Airport. We chose these airports on the basis of several factors, including airport size, geographical dispersion, and airport efforts to test and implement security technologies. We also conducted semistructured interviews with airport security coordinators at each of the 21 category X airports to discuss their views on perimeter and access control security issues. In addition, we contacted or identified 13 other airports that had tested or implemented perimeter and access control security technologies. We reviewed a random sample of 838 airport workers at 10 of the 12 airports we visited (categories X, I, and II) where workers were indicated as having a fingerprint-based criminal history records check in calendar year 2003 to verify that these workers had undergone the check. We did not conduct a records review at the category III and IV commercial airports we visited. We randomly selected probability samples from the study populations of airport workers who underwent a fingerprint-based criminal history record check in the period between January 1, 2003, and the date on which we selected our sample or December 31, 2003, whichever was earlier. With these probability samples, each member of the study populations had a nonzero probability of being included, and that probability could be computed for any member. Each sample element selected was subsequently weighted in the analysis to account statistically for all the members of the population at each airport. Because we followed a probability procedure based on random selections at each airport, our samples are only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular samples' results as 95 percent confidence intervals (e.g., plus or minus 7 percentage points). These are the intervals that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the respective study populations. Further, we interviewed TSA headquarters officials in Arlington, Virginia, from the Office of Internal Affairs and Program Review, the Office of Aviation Operations, the Office of Chief Counsel, the Credentialing Program Office, and the Office of Aviation Security Measures, as well as officials from the Office of Technology in Atlantic City, New Jersey, to discuss the agency's efforts to address perimeter and access control security. We also spoke with officials from two aviation industry associations—the American Association of Airport Executives and Airports Council International—to obtain their views on the challenges associated with improving perimeter and access control security. We also interviewed airport vendors to determine the need and feasibility of requiring all vendors to develop their own security programs. We conducted our work between June 2003 and March 2004 in accordance with generally accepted government auditing standards.
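The confidence interval arithmetic described above for the employee file samples can be illustrated with a short script. The following is a minimal sketch in Python; it assumes a simple random sample drawn without replacement and a normal approximation with a finite population correction, and the population size, sample size, and compliance count shown are hypothetical values for illustration only, not GAO's actual data.

import math

def compliance_interval(population_size, sample_size, compliant_in_sample, z=1.96):
    """Point estimate and 95 percent confidence interval for a compliance rate,
    using the normal approximation with a finite population correction because
    employee files are sampled without replacement."""
    p_hat = compliant_in_sample / sample_size
    fpc = (population_size - sample_size) / (population_size - 1)
    standard_error = math.sqrt(p_hat * (1 - p_hat) / sample_size * fpc)
    margin = z * standard_error
    return p_hat, max(0.0, p_hat - margin), min(1.0, p_hat + margin)

# Hypothetical example: 84 of 86 sampled files at one airport documented a
# completed fingerprint-based criminal history records check.
estimate, lower, upper = compliance_interval(1200, 86, 84)
print(f"Estimated compliance: {estimate:.1%} (95% confidence interval: {lower:.1%} to {upper:.1%})")

In the actual analysis described above, each sampled file was also weighted to account statistically for the airport's full population, a step this simplified sketch omits.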
Risk management is a systematic and analytical process to consider the likelihood that a threat will endanger an asset, an individual, or a function and to identify actions to reduce the risk and mitigate the consequences of an attack. Risk management principles acknowledge that while risk cannot be eliminated, enhancing protection from existing or potential threats can help reduce it. Accordingly, a risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions. The purpose of this approach is to link resources with efforts that are of the highest priority. Figure 4 describes the elements of a risk management approach. Figure 5 illustrates how the risk management approach can guide decision making and shows that the highest risks and priorities emerge where the three elements of risk management overlap. For example, an airport that is determined to be a critical asset, vulnerable to attack, and a likely target would be most at risk and, therefore, would be a higher priority for funding compared with an airport that is only vulnerable to attack. In this vein, aviation security measures shown to reduce the risk to the most critical assets would provide the greatest protection for the cost. According to TSA, once established, risk management principles will drive all decisions—from standard setting to funding priorities and to staffing. TSA has not yet fully implemented its risk management approach, but it has taken steps in this direction. Specifically, TSA's Office of Threat Assessment and Risk Management is in various stages of developing four assessment tools that will help assess threats, criticality, and vulnerabilities. TSA plans to fully implement and automate its risk management approach by September 2004. Figure 6 shows TSA's threat assessment and risk management approach. The first tool, which will assess criticality, will determine a criticality score for a facility or transportation asset by incorporating factors such as the number of fatalities that could occur during an attack and the economic and sociopolitical importance of the facility or asset. This score will enable TSA, in conjunction with transportation stakeholders, to rank facilities and assets within each mode and thus focus resources on those that are deemed most important. TSA is working with another Department of Homeland Security (DHS) office—the Information Analysis and Infrastructure Protection Directorate—to ensure that the criticality tool will be consistent with DHS's overall approach for managing critical infrastructure. A second tool—the Transportation Risk Assessment and Vulnerability Tool (TRAVEL)—assesses threats and analyzes vulnerabilities at those transportation assets TSA determines to be nationally critical. The tool is used in a TSA-led and -facilitated assessment that will be conducted at the site of the transportation asset. The facilitated assessments typically take several days to complete and are conducted by TSA subject matter experts, along with airport representatives such as operations management, regulatory personnel, security personnel, and law enforcement agents. Specifically, the tool assesses an asset's baseline security system and that system's effectiveness in detecting, deterring, and preventing various threat scenarios, and it produces a relative risk score for potential attacks against a transportation asset or facility.
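To make the scoring concept concrete, the following is a minimal sketch of how a relative risk score might combine the three risk management elements (threat, vulnerability, and criticality) to rank threat scenarios. The multiplicative formula, the 0-to-1 scales, and the example scenarios and values are illustrative assumptions only; they are not TSA's actual TRAVEL methodology or data.

from dataclasses import dataclass

@dataclass
class ThreatScenario:
    name: str
    threat_likelihood: float  # 0.0 (unlikely) to 1.0 (highly likely)
    vulnerability: float      # 0.0 (well protected) to 1.0 (exposed)
    criticality: float        # 0.0 (low consequence) to 1.0 (most critical)

def relative_risk(scenario: ThreatScenario) -> float:
    """Combine the three elements into a single relative risk score;
    higher scores indicate higher-priority scenarios."""
    return scenario.threat_likelihood * scenario.vulnerability * scenario.criticality

scenarios = [
    ThreatScenario("Vehicle breaches perimeter fence", 0.4, 0.7, 0.9),
    ThreatScenario("Unauthorized person piggybacks through an access point", 0.6, 0.5, 0.8),
    ThreatScenario("Insider carries a prohibited item into a secured area", 0.3, 0.6, 0.9),
]

for s in sorted(scenarios, key=relative_risk, reverse=True):
    print(f"{s.name}: relative risk {relative_risk(s):.2f}")

Ranking scenarios in this way reflects the idea, described above, that the highest risks and priorities emerge where the three elements of risk management overlap.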
Each established threat scenario contained in the TRAVEL tool outlines a potential threat situation, including the target, threatening act, aggressor type, tactic/dedication, contraband, contraband host, and aggressor path. In addition, TRAVEL will include a cost-benefit component that compares the cost of implementing a given countermeasure with the reduction in relative risk attributable to that countermeasure. TSA is working with economists to develop the cost-benefit component of this model and with the TSA Intelligence Service to develop relevant threat scenarios for transportation assets and facilities. According to TSA officials, a standard threat and vulnerability assessment tool is needed so that TSA can identify and compare threats and vulnerabilities across transportation modes; if different methodologies were used in assessing threats and vulnerabilities, such comparisons could be problematic, whereas a standard assessment tool ensures a consistent methodology. A third tool—the Transportation Self-Assessment Risk Module (TSARM)—will be used to assess and analyze vulnerabilities for assets that the criticality assessment determines to be less critical. The self-assessment tool included in TSARM will guide a user through a series of security-related questions in order to develop a comprehensive security baseline of a transportation entity and will provide mitigating strategies for use when the threat level increases. For example, as the threat level increases from yellow to orange, as determined by DHS, the assessment tool might advise an entity to take increased security measures, such as erecting barriers and closing selected entrances. TSA has deployed one self-assessment module in support of targeted maritime vessel and facility categories. The fourth risk management tool that TSA is currently developing is the TSA Vulnerability Assessment Management System (TVAMS). TVAMS is TSA's intended repository of criticality, threat, and vulnerability assessment data. TVAMS will maintain the results of all vulnerability assessments across all modes of transportation and will provide TSA with data analysis and reporting capabilities. TVAMS is currently in the conceptual stage, and requirements are still being gathered. In addition to those named above, Leo Barbour, Amy Bernstein, Christopher Currie, Dave Hooper, Thomas Lombardi, Sara Ann Moessbauer, Jan Montgomery, Steve Morris, Octavia Parks, Dan Rodriguez, and Sidney Schwartz were key contributors to this report.
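The threat scenario structure and the countermeasure cost-benefit comparison described above can also be sketched. This is a hedged illustration, not the TRAVEL tool itself: the scenario, the candidate countermeasures, and their costs and risk-reduction figures are all hypothetical.

```python
from dataclasses import dataclass

# Illustrative structure for a TRAVEL-style threat scenario. The fields
# mirror the scenario elements listed above; the values are hypothetical.
@dataclass
class ThreatScenario:
    target: str
    threatening_act: str
    aggressor_type: str
    tactic: str
    contraband: str
    contraband_host: str
    aggressor_path: str

scenario = ThreatScenario(
    target="airfield perimeter",
    threatening_act="unauthorized vehicle entry",
    aggressor_type="outsider",
    tactic="forced entry",
    contraband="explosive device",
    contraband_host="vehicle",
    aggressor_path="service gate",
)

# Candidate countermeasures: (name, implementation cost in dollars,
# estimated reduction in the scenario's relative risk score).
countermeasures = [
    ("reinforced vehicle barriers", 250_000, 40),
    ("additional perimeter patrols", 400_000, 35),
    ("intrusion detection sensors", 150_000, 20),
]

# Rank countermeasures by cost per unit of risk reduction (lower is better).
for name, cost, reduction in sorted(countermeasures, key=lambda c: c[1] / c[2]):
    print(f"{scenario.target}: {name} -> ${cost / reduction:,.0f} per unit of risk reduced")
```

The ranking step is the essence of a cost-benefit comparison of this kind: a cheaper countermeasure that buys less risk reduction can still compare favorably with a costly one, and vice versa.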
In the 2 years since passage of the Aviation and Transportation Security Act (ATSA), the Transportation Security Administration (TSA) has primarily focused its efforts on improving aviation security through enhanced passenger and baggage screening. The act also contained provisions directing TSA to take actions to improve the security of airport perimeters, access controls, and airport workers. GAO was asked to assess TSA's efforts to: (1) evaluate the security of airport perimeters and the controls that limit access into secured airport areas, (2) help airports implement and enhance perimeter security and access controls by providing them funding and technical guidance, and (3) implement measures to reduce the potential security risks posed by airport workers. TSA has begun evaluating the security of airport perimeters and the controls that limit access into secured airport areas. Specifically, TSA is conducting compliance inspections and vulnerability assessments at selected airports. These evaluations--though not complete--have identified perimeter and access control security concerns. While TSA officials acknowledged that conducting these airport security evaluations is essential to identifying additional perimeter and access control security measures and prioritizing their implementation, the agency has not determined how the results will be used to make improvements to the entire commercial airport system. TSA has helped some airport operators enhance perimeter and access control security by providing funds for security equipment, such as electronic surveillance systems. TSA has also begun efforts to evaluate the effectiveness of security-related technologies, such as biometric identification systems. However, TSA has not begun to gather data on airport operators' historical funding of security projects and current needs to aid the agency in setting funding priorities. Nor has TSA developed a plan for implementing new technologies or balancing the costs and effectiveness of these technologies with the security needs of individual airport operators and the commercial airport system as a whole. TSA has taken some steps to reduce the potential security risks posed by airport workers. However, TSA had elected not to fully address all related ATSA requirements. In particular, TSA does not require fingerprint-based criminal history checks and security awareness training for all airport workers, as called for in ATSA. Further, TSA has not required airport vendors to develop security programs, another ATSA requirement. TSA said expanding these efforts would require a time-consuming rulemaking process and impose additional costs on airport operators. Finally, although not required by ATSA, TSA has not developed a plan detailing when and how it intends to address these challenges.
Under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), which created the Superfund program in 1980, the Environmental Protection Agency (EPA) assesses uncontrolled hazardous waste sites and places those posing the greatest risks to human health and the environment on the National Priorities List (NPL) for cleanup. As of September 1995, this list included 1,232 sites. Cleanup standards and the degree of cleanup needed for Superfund sites are discussed in section 121(d) of the CERCLA statute, as amended by the Superfund Amendments and Reauthorization Act of 1986 (SARA). This section requires that Superfund sites be cleaned up to the extent necessary to protect both human health and the environment. In addition, cleanups must comply with requirements under federal environmental laws that are legally “applicable” or “relevant and appropriate” (ARAR) as well as with such state environmental requirements that are more stringent than the federal standards. Furthermore, Superfund cleanups must at least attain levels established under the Safe Drinking Water Act and the Clean Water Act, where such standards are relevant and appropriate as determined by the potential use of the water and other considerations. The federal standards most frequently considered relevant and appropriate for groundwater cleanups at Superfund sites are set under the Safe Drinking Water Act. This act establishes standards, called maximum contaminant levels (MCL), for certain contaminants in water delivered by public drinking water systems. As of March 1996, the MCLs included numeric limits on about 70 contaminants. The MCLs take into account estimates of the human health risks posed by contaminants. They also consider whether it is technically and economically feasible to reduce the contamination to a level that no longer poses a health risk. Although MCLs are legally applicable to drinking water systems, section 121(d) of CERCLA generally requires that they be considered relevant and appropriate standards for cleaning up contaminated groundwater that is a potential source of drinking water. For example, the MCL for benzene is 5 micrograms per liter. This concentration would generally be the cleanup level for benzene in groundwater that is a potential source of drinking water unless the state has promulgated a more stringent standard or other requirement that is relevant and appropriate. There are few federal standards for contaminants in soil that are considered potentially applicable or relevant and appropriate except those for certain highly toxic contaminants, most notably polychlorinated biphenyls (PCB) and lead. Under the Toxic Substances Control Act, EPA sets requirements for cleaning up PCB contamination. In addition, EPA has issued guidance for cleaning up lead in soil. Early in its investigation of a site, EPA determines, on the basis of the contamination present and the conditions at the site, which chemical-specific and other standards may be considered applicable or relevant and appropriate. As EPA proceeds with the selection of a cleanup method, it adjusts the list of standards to be considered on the basis of information gained during its investigation. Among the potential standards considered are any state environmental standards that are more stringent than the federal standards for the same contaminants. 
In addition to numeric standards for specific contaminants, some states have set more generalized standards or policies that may have to be considered when cleaning up Superfund sites. For example, some states have established “antidegradation” policies for groundwater that could require more stringent cleanups than cleanups based on health risks. These policies are intended, among other things, to protect the state’s groundwater as a potential source of drinking water. If federal or state standards do not exist for a given contaminant, the party responsible for cleaning up a Superfund site may use a site-specific risk assessment to help establish a cleanup level for that contaminant. A risk assessment evaluates the extent to which people may be exposed to the contaminant, given its concentration and the physical characteristics of the site. For example, the type of soil and the depth of the groundwater may affect whether and how quickly waste will migrate and reach a population. A risk assessment uses exposure and toxicity data to estimate the increased probability, or risk, that people could develop cancer or other health problems through exposure to this contamination. A risk estimate can be used along with the proposed waste management strategy to help determine the extent of the cleanup necessary at a site. EPA has published guidance for conducting risk assessments, a set of documents referred to collectively as the Risk Assessment Guidance for Superfund. These documents outline the well-established risk assessment principles and procedures that can be used to gather and assess information on human health risks. The documents also include information on mathematical models that can be used to estimate health risks at a site, given the contaminants present and the means of exposure to them. In addition to this guidance, EPA maintains an Integrated Risk Information System (IRIS), an on-line database on the toxicity of numerous chemicals, and publishes the Health Effects Assessment Summary Tables (HEAST), another source of information on contaminants’ toxicity. EPA uses this guidance in conducting baseline risk assessments at Superfund sites, which it uses in deciding whether the human health and environmental risks posed by the contaminants are serious enough to warrant cleaning up the sites. Some states also use EPA’s risk assessment guidance in setting their standards for specific chemicals. States that have set environmental standards have made decisions about what levels, or concentrations, of chemical contaminants can remain at hazardous waste sites after cleanups. We analyzed the processes that the states in our survey said they went through, as well as the factors that they said they took into consideration, in developing their soil and groundwater standards. In this section, we first summarize (1) the extent to which the states based their soil standards on estimates of the human health risks posed by contaminants at the sites and (2) the methods that the states used to estimate these risks. We then report on the factors other than health risks that the states said they considered when developing their soil standards. Since the bases for the states’ standards for groundwater differed somewhat from those for soil, we summarized the information on groundwater standards separately. 
Finally, since federal drinking water standards are frequently used as cleanup standards for groundwater, we compared the states' groundwater standards to the federal standards for the same contaminants to determine the extent of their correspondence. We have included the information we obtained from the 33 states in our survey. In all, 21 of the 33 states had set their own standards for either soil or groundwater, or for both media. (See table 2.1.) Thirteen of the 21 states had set their own soil standards, and 20 had set some groundwater standards that were in addition to or different from the MCLs for drinking water, as discussed in the remainder of this section. All 13 of the states with soil standards indicated that they considered risks to human health when developing their standards. The number of chemical-specific standards per state ranged from about 10 to nearly 600. All but one of these states generally relied on EPA's guidance for estimating health risks from contaminants (Missouri had developed its soil standards before EPA issued its guidance). These states said that they had used EPA's guidance, either alone or in combination with their own methodologies and policies, to estimate health risks. (See table 2.2.) For example, Pennsylvania said that it had used EPA's guidance to estimate the toxicity of contaminants and its own model to estimate how much contamination from the soil might travel into groundwater. These estimates are two of the major components in the health risk calculation. The states set their risk levels for carcinogens within the range that EPA uses at Superfund sites, which extends from 1 in 10,000 to 1 in 1 million. As shown in table 2.2, eight states chose the more stringent risk level of 1 in 1 million for individual carcinogens in soil, while five states chose the somewhat less stringent risk level of 1 in 100,000. For noncarcinogens in soil, 11 states used the same measure that EPA uses at Superfund sites, while 2 states used a somewhat more stringent measure. Ten of the 13 states considered factors in addition to health risks when setting their soil standards. As a result, their standards could be either more or less stringent than those based solely on estimates of health risks. These other factors included the following: Chemical levels that occur naturally in the environment. In some locations, certain contaminants may exist naturally in the soil in concentrations differing from those that would be allowed under standards based on risks to human health. For such contaminants, the states typically set their standards at the naturally occurring levels rather than at the levels based solely on risk. In some cases, this practice would result in less stringent cleanups than would be necessary to meet the risk-based standards. However, since some chemicals do not occur naturally in the environment, this practice would in some instances result in more stringent cleanups than would otherwise be required. Detection limits and practical quantification limits. When the concentrations of some contaminants that could remain in the soil without posing health risks fall below the levels that can be accurately measured or detected by current technology, the states said that they typically adopt less stringent, but measurable, concentrations as their standards. Secondary, or aesthetic, criteria. Some chemicals cause unpleasant odors or other problems at levels that do not pose human health risks. The states may set their standards for these chemicals below risk-based levels to protect the public from such problems.
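The way a chosen risk level translates into a numeric cleanup standard can be illustrated with a short sketch. This is a simplified illustration under stated assumptions, not EPA's or any state's actual methodology: the exposure values and the cancer slope factor are placeholders, and an actual calculation would draw toxicity values from sources such as IRIS or HEAST and reflect site-specific exposure pathways.

```python
# Illustrative back-calculation of a risk-based soil cleanup level for a
# carcinogen via the incidental soil ingestion pathway. The allowable
# concentration is the one at which the excess lifetime cancer risk equals
# the chosen target risk level. All numeric inputs are placeholder
# assumptions for this sketch.

slope_factor = 0.1          # cancer slope factor, (mg/kg-day)^-1
soil_ingestion_rate = 100   # soil ingested per day, mg/day
mg_to_kg = 1e-6             # converts ingested soil from mg to kg
exposure_frequency = 350    # days of exposure per year
exposure_duration = 30      # years of exposure
body_weight = 70.0          # kg
averaging_time = 70 * 365   # days (lifetime averaging for carcinogens)

def cleanup_level(target_risk):
    """Allowable soil concentration (mg/kg) for a given target cancer risk."""
    return (target_risk * body_weight * averaging_time) / (
        slope_factor
        * soil_ingestion_rate
        * mg_to_kg
        * exposure_frequency
        * exposure_duration
    )

for risk in (1e-6, 1e-5):   # the 1-in-1-million and 1-in-100,000 levels
    print(f"Target risk {risk:.0e}: cleanup level {cleanup_level(risk):.0f} mg/kg")
```

Under these placeholder assumptions, moving from a 1-in-1-million to a 1-in-100,000 target risk raises the allowable concentration by a factor of 10, which is why the choice of risk level described above has a direct effect on how stringent a state's standard is.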
Twenty of the 33 states we surveyed said that they had set some chemical-specific standards that would limit the concentrations of various toxic chemicals that could be present in groundwater at Superfund sites. These states not only adopted some of the existing federal standards, such as MCLs, but also set some standards in addition to or different from them. The number of chemical-specific standards per state ranged from about 30 to nearly 600. While the remaining states that we surveyed had not developed any of their own groundwater standards, the federal MCLs are typically used as Superfund cleanup standards for groundwater. Nineteen of the 20 states had based their groundwater standards, at least in part, on estimates of the human health risks posed by exposure to chemical contaminants. (See table 2.3.) In the remaining state, none of the officials currently involved in implementing the standards could provide historical information on how the standards had been developed. Sixteen of the states had calculated their own health risk estimates when setting the standards for at least some of the contaminants. Three of the states had not predominantly developed their own estimates but had instead adopted standards developed by others, including some or all of the MCLs, that were based on estimates of health risks. All 16 states that had developed formulas for calculating human health risks had used guidance from EPA on how to estimate such risks, either alone or in combination with their own procedures and formulas. (See table 2.4.) In setting their standards, 13 of these states used a risk level of 1 in 1 million for individual carcinogens, while 3 states used the less stringent risk level of 1 in 100,000. For individual noncarcinogens, 15 states used a measure that was as stringent as EPA's, while 1 state used a more stringent measure. All but 2 of these 16 states said that they had considered factors in addition to human health risks when setting their groundwater standards. Taking such factors into account can affect the concentration of a chemical that a state will allow to remain under its standard. As a result, a standard may be either more or less stringent than one based solely on human health risks. States with antidegradation policies for groundwater, for example, may require more stringent cleanups than would be required solely on the basis of risk. Because the federal MCLs are typically used as cleanup standards for groundwater used as drinking water at Superfund sites and many of the states based some of their own groundwater standards on the federal MCLs, we compared the states' standards for contaminants to the corresponding MCLs. We found that if a federal MCL existed for a chemical that was included in a state's standards, the state usually set its standard at this level. However, a majority of the states had standards for a few chemicals that differed from the MCLs. These standards tended to be more stringent than the MCLs. The states offered a variety of explanations for why their standards were more stringent than the federal MCLs. Two states set more stringent levels for certain contaminants if they could detect the contaminants at levels below the MCLs. Several states reported that some of their standards were more stringent because these standards had not been adjusted, as the MCLs had been, for other factors, such as cost or technical feasibility. Some states' standards may also have been more stringent because the states had antidegradation policies for groundwater.
For example, Wisconsin mandates that the environment be restored to the extent practicable. Consequently, it has set "preventive action limits" for contaminants in groundwater that may be used to determine the extent of the cleanup required at Superfund sites unless it can be shown that meeting such limits would not be technically or economically feasible. All of the preventive action limits are more stringent than the corresponding federal MCLs. They limit the concentrations of chemicals that can cause cancer to one-tenth the concentrations allowed under the MCLs, and they limit the concentrations of chemicals that can cause other health effects to one-fifth the concentrations allowed under the MCLs. However, the state allows exemptions for contaminants that occur naturally at levels exceeding the preventive action limits. Nearly all of the states had only a few, if any, standards for contaminants that were less stringent than the corresponding federal MCLs. However, under SARA, only those numeric standards that are more stringent than the federal standards are to be considered as cleanup levels at Superfund sites. Even though the states have set environmental standards, they have found that applying these standards uniformly to all sites may not be effective because conditions can vary from one hazardous waste site to another. As a result, sites may pose different levels of health risks and may, therefore, require different degrees of cleanup. We examined whether the states (1) allow the level of cleanup determined to be necessary under their standards to be adjusted to take into account site-specific conditions and (2) set different standards for different uses of the land or groundwater (e.g., set more stringent cleanup standards for land that could be used for residential than for industrial purposes). Overall, the states provided more flexibility in applying their soil standards than their groundwater standards. Eight of the 13 states that had soil standards indicated that they allow the extent of the cleanup deemed necessary under their standards to be adjusted for site-specific factors. For example: Georgia's risk reduction standards include the option of determining cleanup target concentrations for contaminants on the basis of site-specific risk assessments. Minnesota characterized its standards as "quick reference numbers," rather than fixed limits, that are considered when determining how extensively to clean up a site. Thus, cleanup levels can be tailored to local conditions. For example, if exposure to contaminants in soil were reduced or eliminated because the soil was inaccessible, the cleanup levels would not need to meet the standards. Alternatively, if multiple contaminants with the same toxic effect were found at the same location, the cleanup level for each individual contaminant might be more stringent than the standard. Pennsylvania said that it has developed interim standards pending final regulations for about 100 soil contaminants but considers these to be "worst case" numbers that can be adjusted to reflect site-specific conditions. The remaining five states generally treated their soil standards as fixed limits that cleanups must achieve, for example, by removing or treating contaminated soil. Alternatively, under certain conditions, some states allow cleanups to be based on site-specific risk assessments. Three of these states also said that they permitted less stringent cleanup levels than those based on their standards if meeting them was not technologically feasible or if naturally occurring levels of chemicals in the local environment were higher than the levels set by the standards.
However, the use of such alternatives was the exception rather than the rule. Some of the states also indicated that even if they do not provide much flexibility in applying their standards, they may permit flexibility in determining how to achieve the required level of protection. For example, instead of requiring costly incineration of contaminated soil to meet its standards, a state may allow the area to be covered with a clay cap so that people cannot come into contact with the contaminants. The states may also provide flexibility by establishing different standards for different projected uses of the land at a site. Ten of the 13 states with soil standards told us they had set such standards. For example, Michigan said that it had defined soil standards for three types of land uses: residential, industrial, and commercial (with two subcategories). Generally, the more stringent standards apply to residential property, since people are more likely to be exposed to contaminants for a longer period of time on residential property than on other types of property. While most states allowed flexibility in their cleanup levels for soil, the states were less flexible in setting cleanup levels for groundwater. The degree of flexibility largely depended on whether the groundwater was considered a potential source of drinking water. Four of the 20 states with groundwater standards provided some flexibility; one of these states, for example, established three cleanup standards of varying stringency, under which parties may be required to place a notice in deed records to inform future property owners of any contamination left on the property. Cleanups under the third standard must also use federal MCLs when available, but for contaminants without corresponding MCLs, site-specific risk-based cleanup levels can be determined on the basis of the site's projected use. The third standard also requires deed notification. The remaining 16 states indicated that, in general, for groundwater used as drinking water or considered potentially usable as drinking water, their standards were fixed limits that must be achieved during cleanup. Most of these states did say, though, that they allowed certain limited exceptions to their standards or the use of a site-specific risk assessment under some circumstances. For example, if the contaminated water came from an area where the contamination would not immediately threaten communities, a state might let the contamination be reduced naturally over time rather than require that it be cleaned up immediately. The states gave various reasons for the relative inflexibility of their groundwater standards for drinking water. First, some of the states said that they were mirroring the federal MCLs for drinking water, which are also fixed limits. Some of the states also noted that, as discussed in section 2, they consider groundwater that may possibly be used as drinking water as a valuable resource that needs to be conserved. Although the states in our survey told us that their standards for groundwater used as drinking water are relatively fixed, some states also reported that they provided some degree of flexibility by not classifying all groundwater as drinking water. They also set less stringent standards for groundwater that would not be considered a potential source of drinking water. For example, Connecticut's groundwater classification system acknowledges that in certain areas, such as those that have had long-term industrial or commercial use, the groundwater would not be a suitable source of drinking water unless it were treated. The state does not usually require that the groundwater in such areas be cleaned up to the standards for drinking water.
Also, some states do not classify groundwater as drinking water if it has a high mineral content or if it is located in a geological formation that does not yield much water. Twelve states told us that they had set standards for other classifications of groundwater, such as groundwater used for agricultural purposes, groundwater of special ecological significance (e.g., supporting a vital wetland), and groundwater in urban, industrial, or commercial areas. Seven of these 12 states indicated that site-specific factors can be taken into account when determining the extent of the cleanup needed for these other types of groundwater. For example, Rhode Island told us that it allows the cleanup levels for some contaminants to differ from the levels set in its standards. Vapors escaping from volatile organic chemicals in the groundwater, for instance, could accumulate in overlying buildings and cause potential health effects; in some cases, these vapors could build up and cause threats of explosion. In setting its "urban" groundwater standards, the state conservatively assumed that the buildings would not be ventilated and that the vapors from the underlying groundwater would be trapped in the buildings. However, in deciding how extensively to clean up a site, the state allows for a consideration of site-specific factors, such as depths to groundwater. When site-specific factors are considered, the cleanup levels may not need to be as stringent as the standards alone would require. The Chairmen and Ranking Minority Members of the House Committee on Transportation and Infrastructure and its Subcommittee on Water Resources and Environment asked us to determine whether states (1) base their numeric standards for cleanups at hazardous waste sites on estimates of the human health risks posed by exposure to contamination and (2) when applying the standards, provide the flexibility to adjust the level of cleanup prescribed by the standards to take into account the conditions and risks found at individual waste sites. To accomplish these objectives, we conducted a telephone survey of 33 states, receiving a response rate of 100 percent. We selected these states because they included approximately 91 percent of the sites that the Environmental Protection Agency (EPA) had included on its National Priorities List (NPL) as of April 1995. We obtained information on standards for contaminants in soil and groundwater, the two media most frequently cleaned up at Superfund sites. (See app. II for a list of the states, the number of NPL sites in each state, the types of standards in each state, and the types of authority for the standards.) We defined standards as limits on the concentrations of toxic chemicals in soil and groundwater and included limits promulgated in a state's laws or regulations or established as guidance or policy. We also included in our definition only standards that might be used as the basis for setting cleanup levels at a Superfund facility. Because petroleum spills are not covered under Superfund legislation, we excluded states that had established standards only for petroleum products under their separate programs for cleaning up leaking underground storage tanks. We excluded states that had simply adopted the federal standards set under the Safe Drinking Water Act or had established antidegradation policies without also setting specific numeric limits on contaminants.
The questions in our survey included (1) whether a state's standards were derived from a risk-based formula and/or other factors, such as the naturally occurring levels of contamination in the soil and groundwater; (2) whether the formulas were based on EPA's guidance or on the state's own methodologies for estimating human health risks from contamination; (3) what risk levels, such as a 1-in-1-million increased probability of contracting cancer, were used in setting the standards; (4) whether the standards were set for different uses of the land or groundwater; and (5) whether the standards were considered fixed limits or the state provided flexibility to adjust the cleanup levels based on these standards to take into account specific conditions at a site. We interviewed the managers of states' Superfund programs, technical experts in these programs, and other key officials responsible for developing and/or implementing the states' standards. When necessary to clarify information, we contacted officials again for follow-up questions. The data we obtained were current as of September 1995. To ensure the accuracy of our information, we provided state officials with a summary of the information we had compiled on their standards for their review. In addition, we provided copies of a draft of our report to EPA officials, including the Director of the Office of Emergency and Remedial Response and officials responsible for working with state Superfund programs, for their review and comment. They said that the report was an accurate discussion of states' standards and provided several technical changes and clarifications on the Superfund law's requirements for cleanups. We incorporated their changes and suggestions. We conducted our audit work from March 1995 through March 1996. Stanley J. Czerwinski, Associate Director; Eileen R. Larence, Assistant Director; Sharon E. Butler, Senior Evaluator; Susan E. Swearingen, Senior Evaluator; Luann M. Moy, Senior Social Science Analyst; and Josephine Gaytan, Information Processing Assistant were key contributors to this report.
Pursuant to a congressional request, GAO provided information on how states establish and apply environmental standards when cleaning up Superfund sites, focusing on whether states: (1) base their standards on human health risks; and (2) provide flexibility so that the level of cleanup can be adjusted according to the extent of contamination. GAO found that: (1) 20 of the 21 states reviewed base their hazardous waste site standards on the danger posed to human health, and the cost and technical feasibility of achieving them; (2) states base their groundwater standards on existing federal drinking water standards; (3) when states set their environmental standards at levels other than the federal limit, they tend to be more stringent; (4) states provide more flexibility in adjusting the cleanup level when the cleanup involves soil pollution rather than groundwater pollution, in order to reflect a particular site's condition and health risk; (5) more than half of the states with soil standards regularly allow their cleanup levels to be adjusted for site-specific conditions; (6) less than one-fourth of the states with groundwater standards allow their cleanup levels to be adjusted; and (7) those states not allowing cleanup level adjustments view their groundwater as a potential source of drinking water and implement different standards, depending on the projected use of land or groundwater.
BJS was established by the Justice Systems Improvement Act of 1979. In 1995, OMB identified BJS as one of 10 principal statistical agencies within the federal government. As defined by OMB, the statistical activities of statistical agencies include the planning of statistical surveys and studies; and the collection, processing, or tabulation of statistical data for publication, dissemination, research, analysis, or program management and evaluation. BJS publishes annual data on criminal victimization, populations under correctional supervision, and federal criminal offenders and case processing. It provides periodic data series on the administration of law enforcement agencies and correctional facilities, prosecutorial practices and policies, state court case processing, felony convictions, the characteristics of correctional populations, criminal justice expenditure and employment, civil case processing in state courts, and special studies on other criminal justice topics. BJS is organizationally located within the Department of Justice’s Office of Justice Programs (see fig. 1). The highest-ranking executives of BJS (BJS Director) and the department’s Office of Justice Programs (Assistant Attorney General) are both noncareer officials appointed by the President and confirmed by the Senate. Within BJS, only the Director is a noncareer appointee. BJS initiated the Police-Public Contact Survey pursuant to a mandate in the Violent Crime Control and Law Enforcement Act of 1994, which required the Attorney General to collect information on the use of excessive force by law enforcement officers. The data were to be used only for research or statistical purposes and were not to contain any information that could reveal the identity of the victim or any law enforcement officer. BJS fielded its first pilot survey in 1996 with the goal of better understanding the types and frequency of contacts between the police and the public, and the conditions under which force may be threatened or used. The pilot survey consisted of 6,421 respondents. The three subsequent surveys (in 1999, 2002, and 2005) consisted of 80,543, 76,910, and 63,943 respondents, respectively. Multiple reports and press releases may be issued in connection with any of the surveys. The years in which reports and a single press release associated with the 1999 and 2002 surveys were issued are shown in table 1. Over the last several years, various types of guidance have been developed to help federal agencies such as BJS ensure the integrity of statistical information. In 1992, in response to requests from Congress and others as to what constitutes an effective statistical agency, the National Research Council began issuing best-practice guidelines. According to the Committee on National Statistics, which authored the guidelines, the guidelines have been widely cited and used by Congress and federal agencies, and have shaped legislation and executive actions to establish and evaluate statistical agencies. These recommended guidelines, which BJS and other statistical agencies may choose to voluntarily follow, cover the review, approval, and dissemination processes of products issued by federal statistical agencies. 
In its guideline document, Principles and Practices for Federal Statistical Agencies, the National Research Council indicated, among other things, that statistical agencies should provide high-quality data, take a strong position of independence, be perceived to be free of political interference and policy advocacy, and strive for wide dissemination of their results. In particular, according to the National Research Council, the quality guidelines are to cover the review process, including verification of sources and results, disclosure of limitations, and accuracy of results; the approval process, including who has authority over the content and timing of the release of a product and the separation of policy from statistical information; and the dissemination process, including the usability of information and its accessibility to a wide range of people. In February 2002 and September 2006, pursuant to the Information Quality Act of 2001, OMB issued policy and procedural guidance to federal agencies, including statistical agencies such as BJS, directing them to develop their own quality guidelines to help maximize the quality, objectivity, utility, and integrity of the information they disseminate. OMB stated that it was essential that federal statistics be collected, processed, and published in a manner that guarantees and inspires confidence in their reliability. Specifically, OMB directed federal agencies to "adopt a basic standard of quality … as a performance goal," and "take appropriate steps to incorporate information quality criteria into agency dissemination practices." In response to OMB's February 2002 guidance, the Department of Justice, Office of Justice Programs, and BJS issued their own guidelines later that year. BJS issued a second edition of its guidelines in 2005. In formulating its guidelines, BJS stated that it sought to provide the public with additional information regarding its methods for ensuring the quality, utility, objectivity, and integrity of the statistics it publicly disseminates. As a component of the Department of Justice, BJS is governed by its own data quality guidelines, as well as the information quality guidelines promulgated by the Office of Justice Programs, Department of Justice, and Office of Management and Budget. The Department of Justice's Information Quality Guidelines are intended to (1) provide the department's components with a foundation for developing their own, more detailed procedures, (2) provide guidance to component staff, and (3) inform the public of the agency's policies and procedures. The Office of Justice Programs' information quality guidelines require its components—including BJS—to (1) assess the usefulness of the information to be disseminated to the public by continuously monitoring information needs, developing new information sources, or revising existing methods, models, and information products where appropriate; (2) ensure disseminated information is accurate, clear, complete, reproducible, and presented in an unbiased manner by using reliable data sources, sound analytical techniques, and documenting methods and data sources; and (3) protect information from unauthorized access, corruption, or revision. As in the case of the Department of Justice's guidelines, the Office of Justice Programs provides guidance to its components in developing their own, more specific quality guidelines.
For all four reports issued from the two Police-Public Contact Surveys, we found that BJS fully followed all of the review, approval, and dissemination guidelines available at the time of issuance. We considered a guideline to have been fully followed if our independent analysts determined that all aspects of the guideline were followed. (Our methodology for how we determined the extent to which BJS followed the guidelines is explained in app. I.) The extent to which BJS followed applicable, available guidelines when it issued its Police-Public Contact Survey reports is shown in table 2. For the first report issued from the 1999 survey, we found that BJS voluntarily followed the National Research Council's 10 applicable existing guidelines; for the second report, we found that BJS voluntarily followed those 10, as well as 2 additional guidelines issued since the first report, for a total of 12. For each of the two reports based on the 2002 survey, we found that BJS followed all 23 available data quality guidelines that had by then been issued by the National Research Council, the Department of Justice, the Office of Justice Programs, and BJS itself. The data quality guidelines that BJS followed describe how agencies should review statistical information, obtain the approval of key decision makers, and publicly disseminate the information. While not all of the guideline-issuing organizations addressed the review, approval, and dissemination process, in total across the four organizations—the National Research Council, Department of Justice, Office of Justice Programs, and Bureau of Justice Statistics—all three areas were addressed. Some examples of the guidelines that BJS fully followed in its report issuance process are listed below. (For a complete list of all available data quality guidelines, see appendix II.) (1) Components of the Department of Justice and Office of Justice Programs will review all information dissemination products for their quality (including objectivity, utility, and integrity) before they are disseminated. (2) All BJS reports and other statistical products must be subject to an objective and appropriate verification process conducted by qualified BJS staff other than the author of the report. (3) The statistical agency has recognition by policy officials outside the statistical agency of its authority to release statistical information without prior clearance. (4) The statistical agency has authority for professional decisions over the scope, content, and frequency of data compiled, analyzed, or published. (5) The objectivity of BJS statistics must be vigilantly protected at all times by BJS staff. On the basis of our analysis, BJS successfully followed all applicable quality guidelines for these survey-based statistical reports, which both BJS and we consider to be statistical products covered by the guidelines. Thus, we believe the agency took proper steps to help ensure the accuracy and integrity of the review, approval, and dissemination processes associated with issuing public reports based on the two surveys we reviewed. BJS concurred with our analysis. All of the reports were posted to the BJS Web site, where the information is to be accessible to the general public. For the single press release that was issued—that is, the 2001 press release based on BJS's 1999 Police-Public Contact Survey—we determined that BJS fully followed 7 of the 10 applicable National Research Council guidelines available at the time.
The 7 federal data quality guidelines that BJS fully followed are listed below. (1) A statistical agency should develop an understanding of the validity and accuracy of its data and convey the resulting measures of quality to users in ways that are comprehensible to nonexperts. (2) A statistical agency should use modern statistical theory and sound statistical practice in all technical work. (3) A statistical agency should maintain a clear distinction between statistical information and policy interpretations of such information by the President, the secretary of the department, or others in the executive branch. (4) A statistical agency should follow good practice, in reports and other data releases, in documenting concepts, definitions, data collection methodologies, and measures of uncertainty, and in discussing possible sources of error. (5) Effective dissemination programs include policies for the preservation of data that guide what data to retain and how they are to be archived for future secondary analysis. (6) An agency should have an established publications policy that describes, for a data collection program, the types of reports and other data releases to be made available, the audiences served, and the frequency of release. (7) Dissemination of data and information (basic series, analytic reports, press releases, public use tapes) should be timely and public. Avenues of dissemination should be chosen to reach as broad a public as reasonably possible. We determined that BJS was not in a position to follow the 3 other applicable quality guidelines in connection with the press release that was issued; no press release was issued based on the 2002 Police-Public Contact Survey findings. It is important to note that, for reasons discussed later in this report, BJS officials did not believe these guidelines were applicable to its press releases in the first place. Two key factors affected whether and how BJS followed quality guidelines during the review, approval, and dissemination of products issued from the 1999 and 2002 Police-Public Contact Surveys. First, while BJS believed, as noted earlier, that its survey reports were statistical products covered by the quality guidelines, it did not believe that the survey-related press release was a statistical product covered by the quality guidelines. BJS cited a lack of specificity in the National Research Council's guidelines as a basis for this conclusion. We believe, however, that while BJS's interpretation of the guidelines was not unreasonable, there was nonetheless sufficient evidence for a different interpretation; namely, that this press release was a statistical product, that the available guidelines did apply, and that BJS was not in a position to meet 3 of the 10 guidelines for the single press release issued from the 1999 survey, owing to a second factor. This second factor was the role that certain noncareer appointees outside BJS have the ability to play, pursuant to Department of Justice policy, in the product issuance process. In certain instances, the roles of these non-BJS officials meant that BJS was not in a position to fully follow all guidelines related to agency independence, and this holds the potential for future actual or perceived political interference in BJS's product issuance process for statistical products.
In both written documentation and oral comments, BJS officials stated that they believed they were in full conformance with the National Research Council’s guidelines and disagreed with our determination that the agency was not in a position to follow 3 of 10 guidelines for the 2001 Police-Public Contact Survey press release that was issued from the 1999 survey. The guidelines that we determined BJS was not in the position to fully follow all pertain to the agency’s independence and, in particular, to its control over the issuance of press releases. These guidelines were: (1) The statistical agency has recognition by policy officials of its authority to release statistical information without prior clearance. (2) The statistical agency has authority for professional decisions over the scope, content, and frequency of data compiled, analyzed, or published. (3) The release of information should not be subject to actual or perceived political interference. In particular, the timing of the public release of data should be the responsibility of the statistical agency. BJS officials asserted that, based on their interpretation of the National Research Council’s guidelines, BJS press releases did not qualify as statistical products and, therefore, press releases did not fall within the purview of the council’s guidelines. They also asserted that neither BJS’s own quality guidelines, nor those issued by the Department of Justice and the Office of Justice Programs, apply to BJS press releases. Both BJS and Office of Justice Programs officials stated that the applicability of the council’s guidelines to BJS press releases was, at a minimum, open to question because the council did not state that press releases are data disseminations. In other words, according to BJS and the Office of Justice Programs, press releases are not publications of data, but rather they are simply announcements that a data publication is forthcoming. In its communications with us, BJS stated that many of the guidelines do not apply to press releases but apply only to statistical products. Based on its content rather than its label as a press release, and notwithstanding that the policies and procedures for developing and issuing products labeled by the Office of Justice Programs as press releases differed from policies and procedures for products it labeled as statistical products, we believe there is sufficient evidence for us to conclude that the press release issued from the 1999 Police-Public Contact Survey qualified as a statistical product to which the National Research Council’s quality guidelines appropriately apply. Our analysis of this press release indicated that it was a data-based statistical product, more than simply an announcement that a data publication was forthcoming. In its entirety, the press release consisted of 20 sentences and one table describing the survey’s statistical findings; 3 sentences on the survey’s methodology; and 5 sentences on who prepared the report and how to obtain copies. We found that this press release was a compilation of statistical data that contained no interpretations, conclusions, or policy statements. (See Appendix III for a reproduction of the press release.) 
In accordance with the council’s guidelines, the release maintained “a clear distinction between statistical information and policy interpretations of such information.” To understand whether the National Research Council was purposeful in not stating that its guidelines were applicable to statistical agency press releases, we contacted the council to seek clarification. Officials from the council’s Committee on National Statistics, which authored the data quality guidelines, stated that although the Principles and Practices document did not specifically state that the guidelines covered the content, scope, and timing of press releases issued by statistical agencies, it was not the committee’s intent to exclude press releases from the guidelines. They stated that, in their view, press releases issued by BJS are statistical products to which it is appropriate to apply the guidelines. BJS and we agree that the National Research Council’s guidelines apply, in general, to statistical products. In asserting that the press release that BJS jointly issued with the Department of Justice and Office of Justice Programs was not a statistical product, BJS correctly noted that the National Research Council did not explicitly state that the guidelines covered press releases. However, given the strong statistical content of the Police-Public Contact Survey press release, we did not believe that such an explication was necessary. Nonetheless, we acknowledge that it is not unreasonable for BJS to reach a different conclusion given the lack of specificity that existed in the council’s printed guidelines. Because BJS’s own data quality guidelines, issued in 2002, state that they “govern all justice statistics that BJS produces and disseminates for the general public, including all statistics that are featured in BJS publications, on the website, and in BJS press releases,” we considered the BJS guidelines to be applicable to press releases, as well. BJS, however, did not hold this view. It is important to note that we are not finding fault with BJS for the conclusions it drew with respect to the applicability of the quality guidelines to its press release issuance process because the National Research Council’s guidelines were not explicit on this matter. Indeed, we noted in our May 2006 report on data quality that 2 of 14 statistical agencies we surveyed stated that there was ambiguity as to whether a statistical press release was a statistical product, and if so, whether statistical agencies could issue them without first getting releases cleared at the departmental level. BJS was among the 14 statistical agencies surveyed, but it was not one of the two agencies reporting ambiguity in whether a statistical press release was a statistical product. Overall, we believe that BJS made a good faith effort to follow the guidelines it deemed to be applicable to the Police-Public Contact Survey products. Deciding which guidelines a statistical agency like BJS should follow is further complicated by the fact that BJS’s parent organizations—the Department of Justice and Office of Justice Programs—have explicitly stated that their own guidelines do not apply to press releases. However, these organizations’ guidelines are intended to be broadly applicable to both statistical and nonstatistical agencies. For example, the Department of Justice comprises 38 separate component organizations that produce a variety of types of information, both statistical and nonstatistical in nature. 
The Office of Justice Programs is composed of 6 bureaus and program offices, and these, too, produce both statistical and nonstatistical information. Because we believe that press releases issued by the department and the Office of Justice Programs may in some, but not all, instances be statistical products, we do not hold the view that statistical guidelines should be universally applicable to all press releases issued by the Department of Justice and Office of Justice Programs. However, because different interpretations can arise, we believe that clarification regarding which guidelines should be applied under which circumstances—and, specifically, to press releases—would be helpful to statistical agencies that are in situations similar to BJS’s. To address potential discrepancies such as these, in a May 2006 report on the quality of federal data, we recommended that to help improve governmentwide data dissemination practices that would further safeguard the integrity of federal statistical data, OMB should consider how best to address the gaps we identified between agencies’ data dissemination practices and the National Research Council’s guidelines. We noted in that report that OMB, in concert with federal statistical agencies, was developing a governmentwide directive on the release and dissemination of statistical products that, according to OMB officials, parallels the council’s and other generally accepted dissemination practices. We pointed out that it will be important for OMB’s directive to consider, for example, whether the directive should cover principal statistical agencies only, the statistical functions of all agencies, or only statistical products. OMB officials indicated that the guidance is intended to help ensure that statistical products are policy-neutral, timely, and accurate. We recommended that, among other things, OMB include in this directive (1) clear definitions of what is and is not covered, (2) the extent to which agencies should document their data dissemination guidance and how often the guidance should be reviewed, (3) the amount of flexibility agencies have in implementing OMB’s guidance, and (4) procedures for monitoring agencies’ adherence to its directive. To the extent that statistical agencies appropriately follow these practices, the directive could promote more consistent adherence to practices that facilitate broader dissemination of statistical data and enhance its credibility. Although OMB did not provide comments on the recommendations in our 2006 report, an OMB official told us that as of January 2007, OMB was still working on this directive. We believe it remains important for OMB to complete its directive on the release and dissemination of statistical products in order to help safeguard the integrity of federal statistical data, reduce the likelihood that the type of disagreement discussed in this report would recur, and help assure both the actual and perceived independence of BJS. The second key factor that affected whether and how BJS followed guidelines concerned the involvement of noncareer appointees outside of BJS in the press release issuance process, and had implications for BJS’s independence as a statistical agency. 
Specifically, we determined that BJS was not in a position to fully follow the 3 National Research Council guidelines listed in the previous section for the 2001 press release based on the 1999 survey (the only applicable, available data quality guidelines in place in 2001) because certain noncareer appointees outside of BJS and within the Department of Justice are vested—pursuant to the Department of Justice's and Office of Justice Programs' policies defining the roles and responsibilities of their noncareer appointees—with the ability to participate in the review, approval, and dissemination of press releases. In certain cases, the roles and responsibilities of these noncareer appointees precluded BJS from being in the position to fully follow certain guidelines. The Assistant Attorney General within the Department of Justice's Office of Justice Programs has general statutory responsibilities with respect to coordinating the activities of that office and its various components, such as BJS. These statutory provisions do not specifically address the Office of Justice Programs' role with respect to the review, approval, and dissemination of press releases. However, under departmental policy, noncareer appointees within the Department of Justice and outside of BJS have the ability to participate in the press release issuance process. Table 3 shows the type of involvement that the Assistant Attorney General in the Office of Justice Programs and other noncareer appointees generally have had in the press release review, approval, and dissemination process. Appendix IV describes in more detail the responsibilities of these various officials associated with review, approval, and dissemination procedures for both BJS reports and press releases. With respect to the first of the three guidelines, which calls for a statistical agency to have authority to release information without prior clearance, it is our view that BJS was not in a position to follow this independence-related guideline at all because it did not have the ability to do so. This is because press releases are subject to review and approval by not only the BJS Director, but also by other Department of Justice noncareer appointees. Outside of BJS, the noncareer appointees participating in the clearance process are located in the Department of Justice's Office of Justice Programs (these include the Office's Chief of Staff, Deputy Assistant Attorney General, and Assistant Attorney General) and Office of Public Affairs. The current Assistant Attorney General and two former Assistant Attorneys General in the Office of Justice Programs told us that there is no written, formal policy or guidance that bounds their input and decision-making roles and responsibilities with respect to BJS press releases. BJS and OJP officials indicated that the Office of Justice Programs' Assistant Attorney General has ultimate responsibility for the review and approval of BJS press releases. Press releases are issued jointly on letterhead listing BJS and the Department of Justice. The current BJS Director confirmed that publication and dissemination functions for press releases are considered to be within the Assistant Attorney General's oversight authority.
Because the National Research Council stated that an aspect of independence includes “recognition by policy officials outside the statistical agency of its authority to release statistical information without prior clearance,” we concluded that BJS was not in the position to follow this guideline because, as we have stated, we believe the Police-Public Contact Survey press release was a statistical product that BJS could not issue independently. In practice, the ways in which Assistant Attorneys General of the Office of Justice Programs have exercised their authority have varied. For instance, one former Office of Justice Programs’ Assistant Attorney General told us that she placed “self-imposed” limits on her decisions to modify the content of a BJS press release based on her awareness of congressional support for, and her own belief in, the independence of statistical agencies. The current Office of Justice Programs’ Assistant Attorney General told us that she reviews only press releases that contain quotes from the Attorney General. She said that since she assumed her position in 2005, there have been no BJS press releases that have quoted the Attorney General, and she has relied on her Deputy Assistant Attorney General, the BJS Director, and others to ensure the accuracy and clarity of press releases. Nevertheless, the BJS Director must obtain the approval of the Office of Justice Programs’ Assistant Attorney General and other Justice noncareer appointees to issue a press release. With respect to the second guideline, pertaining to the agency’s decisions over the scope, content, and frequency of data compiled, analyzed, or published, we found that BJS was not in a position to fully follow this independence-related guideline. Specifically, we found that BJS could exercise professional decisions about the frequency of data analyzed and published (within available budgets), but did not always have complete control over the scope and content of survey press releases to be issued. As noted above, this was due to the fact that press releases are joint products of BJS, the Office of Justice Programs, and the Department of Justice, and noncareer appointees outside of BJS can become involved in the press release process. BJS’s situation with respect to this second guideline came to the fore during the drafting of a press release in 2005 based on the 2002 Police-Public Contact Survey. The press release that BJS sought to publish would have included the following statistical findings from the accompanying Police-Public Contact Survey report: (1) there was no statistically significant difference between the rates that white and minority drivers reported being stopped by police, and (2) once stopped, a larger percentage of black and Hispanic minority drivers reported police using or threatening to use force against them than did whites. The then-BJS Director and the then-Acting Assistant Attorney General had a difference of opinion regarding the presentation of the second statistical finding, which was included in the Police-Public Contact Survey report. Despite reported efforts on the part of both parties to negotiate alternative language with respect to the content of the press release, they could not resolve their differences and the BJS Director decided that a press release would not be issued. The current BJS Director told us that it is “inconceivable” that the Assistant Attorney General would issue a press release without the BJS Director’s prior approval. 
According to current BJS officials (both career and noncareer) and the Office of Justice Programs' Office of Communications staff, during the period 1996-2006, this was the only instance in which a BJS press release was prepared but not issued because the Office of Justice Programs and BJS could not agree on the contents. In all other instances during this period, according to these officials, when the parties disagreed on the content of a press release, they were able to resolve their differences. With respect to the third guideline, pertaining to actual or perceived political interference and the timing of a release, we similarly believe BJS was not in a position to fully follow this independence-related guideline for the 2001 press release, which, as discussed earlier, we believe to be a statistical product. Although we found no evidence of political interference with the timing of the 2001 survey press release issued from the 1999 survey, we found that BJS does not have complete control over the timing of press releases, as recommended by the National Research Council. Since both noncareer appointees and career officials in the Office of Justice Programs and the Department of Justice have a role in reviewing and approving BJS press releases, they can affect the date that a press release is issued. According to BJS, career and noncareer appointees outside of BJS can delay the issuance of a press release for reasons having nothing to do with political interference, such as a determination that the press release is not sufficiently newsworthy at the time that it was designated to be issued. On balance, we believe that the noncareer appointees who played decision-making roles in the Police-Public Contact Survey press release process that we reviewed acted within the scope of the roles and responsibilities accorded them under Department of Justice policies, and that BJS made a reasonable effort to adhere to all applicable data quality guidelines. The fact that certain noncareer officials have the ability to make decisions that affect BJS's ability to fully meet federal data quality guidelines suggests, however, that the potential exists for BJS's review, approval, and dissemination process for statistical products to be subject to political interference. Thus, certain actions by noncareer appointees—though made on the basis of professional judgment—could put them at odds with the very guidelines designed to ensure the statistical independence and integrity of agencies such as BJS. We provided a draft of this report to the Department of Justice for review and comment. On March 13, we received written comments on the draft report from the Office of Justice Programs' Assistant Attorney General, and the comments are reproduced in full in appendix VI. In her letter, the Assistant Attorney General affirmed several of our findings and agreed that a need exists for clear definitions about what quality guidelines cover. She noted that competing interpretations exist about what constitutes a statistical product and that the federal statistical community would benefit from clarity in this area. However, the Assistant Attorney General disagreed with our characterization of the 2001 Police-Public Contact Survey press release as a statistical product and, therefore, with our conclusion that the National Research Council's quality guidelines applied to this press release.
The Assistant Attorney General stated that "a press release … is a public relations announcement issued to encourage media coverage. The mere presence of statistics in a press release does not transform a press release into a statistical product." We do not believe, and have not stated, that the mere presence of statistics in a press release in and of itself transforms it into a statistical product, any more than we believe that labeling a document that lacks statistics a statistical product necessarily makes it one. The Assistant Attorney General also stated that we "mischaracterized" BJS's data quality guidelines as applying to press releases because the guidelines apply only to the statistics contained in BJS press releases, and because BJS conforms with OMB, the Department of Justice, and the Office of Justice Programs in considering press releases to be outside the scope of the guidelines. For the following reasons, we maintain that we made a sound decision in applying BJS's guidelines to the Police-Public Contact Survey press release: (1) BJS's guidelines state that they "govern all justice statistics that BJS produces and disseminates for the general public, including all statistics that are featured in BJS publications, on the website, and in BJS press releases;" and (2) the Police-Public Contact Survey press release was made up almost entirely of survey statistics, indicating to us that it was a statistical product. In determining that the Police-Public Contact Survey press release was a statistical product, we felt that the content of the press release was a more important determinant than the label attached to it, or the fact that the processes and staff involved in developing the press release were different from those for BJS reports. The Assistant Attorney General also noted that the National Research Council's written guidelines did not explicitly cover press releases. Because we agree, we contacted the National Research Council and consulted with officials of the Council's Committee on National Statistics (the authoring committee of the Principles and Practices). The officials concurred with our view that BJS press releases referring to statistical products (as opposed to press releases about the announcement of a new agency head, for example) are statistical products to which it is appropriate to apply the guidelines. Although the Principles and Practices document did not specifically state that the guidelines covered the content, scope, and timing of press releases issued by statistical agencies, according to these officials it was not the Committee's intent to exclude such press releases from the guidelines. The Assistant Attorney General also felt that the draft report overstated the potential threats to BJS's independence because we used the term "statistical products" to refer to press releases. She was concerned with our observation that the potential exists for BJS's review, approval, and dissemination process for statistical products to be subject to political interference since noncareer officials can affect BJS's ability to meet federal data quality guidelines. We stand by this conclusion. Department of Justice policy permits noncareer appointees within the Department but outside of BJS to participate in the press release process. At the same time, however, we are unaware of anything that prevents future modifications to that policy to similarly allow noncareer appointees to participate in BJS's report issuance.
Thus, we believe that we have correctly assessed the risk of potential or actual threats to BJS's independence. Finally, the Assistant Attorney General stated that even if the council's written guidelines explicitly applied to press releases, the BJS Director would not be required to adhere to them, and no current law could compel him to do so. We recognize that the guidelines are voluntary rather than legally required and have never said otherwise. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of the report to the Attorney General, the Director of the Office of Management and Budget, and other interested parties. In addition, the report will be available at no charge on GAO's home page at http://www.gao.gov. Please contact Brian Lepore at (202) 512-4523 or leporeb@gao.gov if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report addresses the following two objectives for the 1999 and 2002 Police-Public Contact Surveys, the two surveys for which products had been issued as of February 2007: (1) To what extent did the Bureau of Justice Statistics (BJS) follow available guidelines to help ensure the accuracy and integrity of the review, approval, and dissemination of reports and press releases based on its surveys? (2) What key factors affected whether and how BJS followed available guidelines? In addition, we provide information on scope and methodology changes in the Police-Public Contact Surveys over time (see app. V). To address the first objective, regarding the extent to which BJS followed guidelines, we obtained quality guideline documents from BJS, the Department of Justice's Office of Justice Programs, the Department of Justice, and the National Research Council. The guidelines that we obtained from these organizations covered the period between February 2001, when the first product based on the 1999 Police-Public Contact Survey was issued, and June 2006, when the most recent product based on the 2002 survey was issued. We included these federal organizations in our review because BJS is a component of the Office of Justice Programs, which in turn is a component of the Department of Justice, and BJS considers itself to be "governed by" the information quality guidelines of these organizations. We included the National Research Council in our review because it is a widely recognized organization that issued guidelines that were intended to be statements of best practice and provide information on what constitutes an effective statistical organization. We also reviewed guidance and directives issued by the Office of Management and Budget (OMB) because OMB is charged with issuing governmentwide policy and procedural guidance to federal agencies, which are then encouraged to issue their own implementation guidelines. We took several steps to determine the extent to which BJS followed the specific quality guidelines that it, the Office of Justice Programs, the Department of Justice, and the National Research Council had issued. From the documents provided by these four organizations, a GAO analyst initially identified a total of 63 guidelines that pertained to product review, approval, and dissemination processes.
For verification purposes, a GAO methodologist also reviewed the guideline documents. The GAO methodologist agreed with the analyst that all 63 guidelines were appropriate for inclusion in our review. Because many of the guidelines issued by the four organizations were similar and overlapping, the GAO analyst reduced the list to 24 nonduplicative guidelines. The GAO methodologist again reviewed the work of the analyst, and in all cases agreed with the analyst that similar guidelines were being appropriately grouped. We then developed a data collection instrument to determine whether BJS was following guidelines for the 1999 and 2002 Police-Public Contact Surveys, on which information could be recorded as to whether BJS fully followed, partially followed, or did not at all follow each of the guidelines. We defined "fully" as all aspects of the guideline being followed; "partially" as some, but not all, aspects of the guideline being followed; and "not at all" as no aspects of the guideline being followed. We asked BJS to complete a separate data collection instrument with respect to each of its 1999 and 2002 Police-Public Contact Survey reports and one press release, and to support each response by providing documentary evidence. To decrease the burden on BJS, GAO analysts completed the data collection instrument for 9 of the 24 guidelines, for which we already had sufficient information (for example, documents describing agency processes and procedures, and interviews regarding the roles and responsibilities of noncareer appointees). We provided our assessments regarding these guidelines to BJS and asked officials to either confirm or not confirm them. Two GAO analysts reviewed BJS's responses and all available supporting documentary and testimonial evidence, and determined whether BJS fully met, partially met, or did not at all meet each guideline. We provided our findings to BJS for review and comment. BJS's 1996 and 2005 Police-Public Contact Surveys were outside the scope of our work. We excluded the 1996 survey because that was a relatively small-scale pilot study; and we excluded the 2005 survey, the most recent Police-Public Contact Survey conducted, because no reports or press releases have yet been issued from this work. To address the second objective, regarding key factors that affected whether and how BJS followed guidelines, we reviewed processes and procedures that described the review, approval, and dissemination processes for BJS-generated reports and press releases, with particular interest in identifying the roles of noncareer appointees involved in each of these processes. We also reviewed pertinent statutory provisions relating to the roles and responsibilities of officials with respect to BJS. We conducted in-person interviews with, or obtained written responses to our questions from, noncareer appointees in BJS, the Office of Justice Programs, and the Department of Justice's Office of Public Affairs. Specifically, we conducted in-person interviews with the current BJS Director and the BJS Director who was involved in the disagreement with the Acting Assistant Attorney General, as well as with the current Assistant Attorney General and Deputy Assistant Attorney General in the Office of Justice Programs. We obtained detailed written responses to our questions from a former BJS Director, the Acting Assistant Attorney General who was involved in the disagreement with the BJS Director, and two former Assistant Attorneys General from the Office of Justice Programs.
We conducted a telephone interview with the current Deputy Director of the Department of Justice’s Office of Public Affairs. Among other things, we asked these noncareer appointees to provide us with information about BJS’s process for reviewing, approving, and disseminating reports and press releases; the roles and responsibilities of noncareer appointees in that process; changes, if any, that had occurred in the roles played by noncareer appointees; procedures used to help ensure that BJS reports and press releases were accurate, reliable, and unbiased; and any factors that may have affected BJS’s independence in the product issuance process. Finally, we reviewed the guidelines of BJS, the Department of Justice’s Office of Justice Programs, the Department of Justice, and the National Research Council to determine that they reflected the product issuance processes and to consolidate them in order to eliminate duplication. To determine what changes, if any, have occurred in the scope and methodology of the Police-Public Contact Surveys between 1996 and 2006, which we present in appendix V, we initially developed a matrix of key scope and methodology dimensions, based on a review of the standard social science literature. We then conducted interviews and reviewed documents with respect to these dimensions, for all four Police-Public Contact Surveys—the 1996 pilot survey and the surveys of 1999, 2002, and 2005. We interviewed the current BJS Director and a former BJS Director, and available report authors and the key statistician participating in administrations of the survey, to ascertain their views concerning the intended scope of the four surveys, the methodologies used, scope and methodology changes that were made, and reasons for any changes. We also obtained written responses to our questions from these officials. We conducted a detailed documentary review of the scoping and methodology sections of the issued Police-Public Contact Survey reports and press releases, and extracted information about changes in the data collection instruments used (for example, the numbers and types of questions asked about searches and the use of force). In addition, we reviewed documents prepared by the American Statistical Association and the U.S. Bureau of the Census, which conducted field tests to ensure that Police-Public Contact Survey questions were appropriately devised. In cases where we noted that changes had been made between surveys, we reviewed Census Bureau documentation and interviewed staff and officials at BJS. It was beyond the scope of this review to address any personnel issues that may have arisen in connection with the disagreement over the content of the 2005 draft press release based on the 2002 Police-Public Contact Survey. We conducted our work between April 2006 and January 2007 in accordance with generally accepted government auditing standards. BJS followed numerous recommended data quality guidelines designed to help ensure the accuracy and integrity of the Police-Public Contact Survey products that it issued in 2001, 2002, 2005, and 2006 based on its 1999 and 2002 surveys. The product issuance guidelines were used to aid BJS’s efforts to review, approve, and disseminate these statistical products to the public and others. The guidelines were issued at various points in time by the following organizations: the National Research Council, the Bureau of Justice Statistics, the Department of Justice, and the Office of Justice Programs. 
In addition to reviewing the guidelines of these four organizations, we also reviewed guidelines and directives issued by the Office of Management and Budget (OMB). However, we did not specifically assess BJS's practice with respect to following OMB's guidelines because OMB issued governmentwide policy and procedural guidance to federal agencies that called for agencies to develop their own implementing guidelines. Table 4 shows the guidelines that were available at the time BJS's 1999 and 2002 Police-Public Contact Survey products were issued, and which guidelines BJS was in a position to follow. Since the inception of the Police-Public Contact Survey in 1996, the BJS Director has been the single noncareer appointee who has had a decision-making role in BJS's review, approval, and dissemination processes for reports. The BJS Director is a noncareer presidential appointee subject to Senate confirmation. Figure 2 provides an overview of the process followed by BJS in the review, approval, and dissemination of Police-Public Contact Survey reports. As indicated by the figure, the BJS report author and supervisor prepare the draft report for review and approval. The BJS Director reviews the draft, requests any changes, approves the final draft, and transmits a memorandum of notification through the Office of Justice Programs' Assistant Attorney General up the chain of command to the Attorney General. The memorandum contains an abstract of the report, selected survey findings, and a projected release date for the report. BJS sets the release date for 30 days from the date that the Assistant Attorney General signs the memorandum of notification. The report is posted to the Web site at that time, or sooner if the date and time are specified in the notification memo. In contrast to the process followed for survey reports, several noncareer appointees are involved in the review, approval, and dissemination process for press releases, as shown in figure 3. As indicated in the figure, in addition to the Director of BJS, there are three noncareer appointees within the Office of Justice Programs who participate in the review and approval process—the Chief of Staff, the Deputy Assistant Attorney General, and the Assistant Attorney General, and at least one noncareer appointee within Department of Justice headquarters: the Director of the Office of Public Affairs. The BJS report author and supervisor jointly work with staff from the Office of Justice Programs' Office of Communications to prepare the press release. The BJS Director reviews the draft press release, requests any changes, approves the final draft, and transmits the press release up the chain of command to the Office of Justice Programs' Assistant Attorney General for review and approval. Following approval by the Assistant Attorney General, the Department of Justice's Office of Public Affairs reviews the press release for clarity, and the BJS Director then verifies that the information in the press release is accurate. The Department of Justice's Office of Public Affairs is then responsible for disseminating the press release to Congress, the media, and executive department press offices, while BJS is responsible for disseminating the press release through its Web site. BJS has conducted four Police-Public Contact Surveys as supplements to the National Crime Victimization Survey. The first Police-Public Contact Survey was conducted as a pilot in 1996.
Three subsequent, more extensive surveys were conducted at 3-year intervals: 1999, 2002, and 2005. Although we do not discuss the 2005 survey in this report because no reports or press releases have yet been issued from this survey, we present information on the 2005 survey in this appendix because information is available on this survey's scope and methodology. The scope of the Police-Public Contact Surveys has consistently expanded over time, while the methodology has remained generally consistent. The pilot survey was designed to test whether the survey could be effectively used as a supplement to the National Crime Victimization Survey to collect data on (a) the types of contacts the public have with the police, and (b) police use of force. To conduct this test, BJS employed a representative sample of 6,421 U.S. residents. The pilot survey yielded useful information on the various types of contacts the public had with the police, and whether force was used by the police. However, the sample size of the pilot survey was not sufficiently large for BJS to draw inferences about the extent to which the population at large would report that they experienced "excessive" use of force by police. For its Police-Public Contact Survey in 1999, BJS increased the sample size to a representative sample of 80,543 U.S. residents. The scope of the survey was further enhanced by adding questions about traffic stops (the most common form of public-police contact, as determined in the 1996 pilot survey), and including a question on whether the police used excessive force during any contact with the public. BJS officials told us that they added the traffic stop questions, at least in part, to "address the growing public concern about racial profiling in connection with traffic stops." In its 2002 survey, BJS expanded and refined its survey questions further. Specifically, according to BJS officials, they added questions that would help BJS estimate the extent to which U.S. residents nationwide would say that (1) they were stopped by the police while driving, (2) they or their vehicle were searched by the police without their permission during a traffic stop, and (3) they were arrested as a result of the search. In addition, BJS officials said that they added questions to estimate differences, if any, among racial groups in their rates of traffic stops at various times of the day, and whether police used force in situations where persons were engaged in such behaviors as arguing with, cursing, or disobeying police. In 2005, according to BJS officials, the scope of the Police-Public Contact Survey was further extended in several ways, including the following: (a) respondents were asked whether they had been arrested for driving under the influence of alcohol during the year (in order to make comparisons with Federal Bureau of Investigation (FBI) arrest rates, so that potential undercounting rates could be determined); (b) respondents were permitted to group themselves into any combination of racial categories (rather than choosing a single category) to better refine respondent demographic status; (c) respondents were asked whether police used force during any of their police contacts during the year, as opposed to the more limiting question in 2002, which was directed only toward the most recent contact with police; and (d) respondents were provided open-ended response fields on the survey instrument to indicate any ways they believed that the police had acted inappropriately toward them.
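To make the sample-size point above concrete, the short sketch below compares approximate 95 percent margins of error for a rare outcome at the pilot and 1999 sample sizes. It is only illustrative: the 1 percent prevalence figure is an assumed value, not a survey result, and the formula treats each sample as a simple random sample even though the actual surveys used stratified, multi-cluster designs that would yield somewhat wider intervals.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed, purely illustrative prevalence of a rare outcome such as
# reported excessive use of force.
p = 0.01

for label, n in [("1996 pilot", 6_421), ("1999 survey", 80_543)]:
    moe = margin_of_error(p, n)
    print(f"{label}: n = {n:>6,}, estimate = {p:.1%} +/- {moe:.2%}")
```

Under these assumptions, the uncertainty at the pilot sample size is roughly a quarter of the estimate itself, which helps illustrate why BJS concluded that it could not draw reliable inferences about excessive force from the 1996 sample.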
The 1999, 2002, and 2005 Police-Public Contact Surveys have consistently maintained a similar methodology. The methodological dimensions of the surveys that have remained consistent are geographic coverage, target population, sampling design, data collection method, sample size/response rate, survey administration, and sample characteristics (as indicated in table 5). To illustrate, all three surveys involved selecting nationally representative, stratified, multi-cluster samples from the population of U.S. residents 16 years of age or older. The data were collected either through face-to-face or computer-assisted telephone interviews. The surveys were administered during the last 6 months of the year, and the demographic characteristics of the samples were similar across time periods. The 1996 pilot survey differed in several ways from the subsequent three surveys. Specifically, the pilot survey included residents younger than 16, included far fewer people than the subsequent surveys, and limited the sampling to individuals who had participated in the last round of the National Crime Victimization Survey. In addition, the percentage of face-to-face interviews was lower, and the survey administration period was shorter and occurred during a different time of year than in the other three surveys. In addition to the above, Evi L. Rezmovic, Assistant Director; Ronald S. Fecso, Chief Statistician; Jared A. Hermalin; Karen A. Jarzynka; Amanda K. Miller; Amy L. Bernstein; Geoffrey R. Hamilton; Robert Alarapon; and Tracy J. Harris made key contributions to this report.
The Bureau of Justice Statistics (BJS), a statistical agency of the Department of Justice's Office of Justice Programs, produces a recurring national Police-Public Contact Survey documenting contacts between the police and the public, including instances involving the use or threat of force by police. BJS issues public reports and sometimes press releases from survey results. For reports and a press release issued from the 1999 and 2002 surveys (the most recent available), GAO reviewed (1) the extent to which BJS followed quality guidelines to ensure the accuracy and integrity of its survey-related products, and (2) factors that affected whether and how BJS followed available guidelines. GAO reviewed applicable federal data quality guidelines, policy and procedure documents, and interviewed current and former officials familiar with BJS. BJS followed nearly all quality guidelines for its 1999 and 2002 Police-Public Contact Surveys. Specifically, for the four public reports issued from these surveys, BJS fully followed all data quality guidelines available for reviewing statistical information, obtaining the approval of key decision makers, and publicly disseminating information. These guidelines were issued by the National Research Council, Department of Justice, Justice's Office of Justice Programs, and BJS itself. GAO believes that because BJS followed these guidelines, proper steps were taken to help ensure the accuracy and integrity of the reports. BJS followed 7 of the 10 quality guidelines available for the one press release issued from its 1999 survey, but was not in a position to fully follow 3 other guidelines for reasons discussed below. Two key factors affected whether and how BJS followed quality guidelines. The first concerned different interpretations about certain guideline applicability. BJS considered its survey-related reports--but not its press releases--to be statistical products covered by the National Research Council's guidelines. BJS cited a lack of specificity in these guidelines, which did not specifically state that they were applicable to statistical agency press releases, as a basis for concluding that the survey press releases need not conform to guidelines for statistical products. We believe BJS's position was not unreasonable, and did not find fault with the agency. However, we determined nonetheless that the single press release issued from the 1999 survey was a statistical product, and therefore believe the council's guidelines appropriately applied. Second, certain noncareer appointees outside BJS may, in accordance with Justice Department policy, make decisions about the review, approval, and dissemination of press releases, and BJS press releases are jointly issued with the Justice Department, with input from its Office of Justice Programs. Both conditions can potentially affect BJS's independence. Owing to these conditions, BJS was not, in our view, in a position to meet 3 council quality guidelines related to statistical agency independence, including that it be able to issue statistical products without prior clearance, and control the scope and content of its products. Justice affirmed several of GAO's findings but disagreed with certain GAO conclusions about the applicability of guidelines to a press release. Justice's detailed comments and GAO's response are contained in the report.
The radio frequency spectrum is the resource that makes possible wireless communications and supports a vast array of government and commercial services. DOD uses spectrum to transmit and receive critical voice and data communications involving military tactical radio, air combat training, precision-guided munitions, unmanned aerial systems, and aeronautical telemetry and satellite control, among others. The military employs these systems for training, testing, and combat operations throughout the world. Commercial entities use spectrum to provide a variety of wireless services, including mobile voice and data, paging, broadcast television and radio, and satellite services. In the United States, FCC manages spectrum for nonfederal users under the Communications Act, while NTIA manages spectrum for federal government users and acts for the President with respect to spectrum management issues as governed by the National Telecommunications and Information Administration Organization Act. FCC and NTIA, with direction from Congress and the President, jointly determine the amount of spectrum allocated for federal, nonfederal, and shared use. FCC and NTIA manage the spectrum through a system of frequency allocation and assignment. Allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users. (Fig. 1 illustrates examples of allocated spectrum uses, including DOD systems using the 1755-1850 MHz band.) In addition, spectrum managers specify service rules, which include the technical and operating characteristics of equipment. Assignment, which occurs after spectrum has been allocated for particular types of services or classes of users, involves providing users, such as commercial entities or government agencies, with a license or authorization to use a specific portion of spectrum. FCC assigns licenses within frequency bands to commercial enterprises, state and local governments, and other entities. Since 1994, FCC has used competitive bidding, or auctions, to assign certain licenses to commercial entities for their use of spectrum. Auctions are a market-based mechanism in which FCC assigns a license to the entity that submits the highest bid for specific bands of spectrum. NTIA authorizes spectrum use through frequency assignments to federal agencies. More than 60 federal agencies and departments combined have over 240,000 frequency assignments, although 9 departments, including DOD, hold 94 percent of all frequency assignments for federal use. Congress has taken a number of steps to facilitate the deployment of innovative, new commercial wireless services to consumers, including requiring more federal spectrum to be reallocated for commercial use. Relocating communications systems entails costs that are affected by many variables related to the systems themselves as well as the relocation plans. Some fixed microwave systems, for example, can use off-the-shelf commercial technology and may just need to be re-tuned to accommodate a change in frequency. However, some systems may require significant modification if the characteristics of the new spectrum frequencies differ sufficiently from the original spectrum. Specialized systems, such as those used for surveillance and law enforcement purposes, may not be compatible with commercial technology, and therefore agencies have to work with vendors to develop equipment that meets mission needs and operational requirements.
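The following minimal sketch, with hypothetical bidders and dollar amounts, illustrates only the basic market-based idea described above, that a license goes to the highest bidder; actual FCC auctions use simultaneous multiple-round bidding across many licenses rather than a single sealed round.

```python
# Hypothetical bidders and bid amounts, for illustration only.
bids = {
    "Carrier A": 1_250_000_000,
    "Carrier B": 980_000_000,
    "Carrier C": 1_400_000_000,
}

# Assign the license to the highest bidder.
winner = max(bids, key=bids.get)
print(f"License assigned to {winner} for ${bids[winner]:,}")
```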
In 2004, the Commercial Spectrum Enhancement Act (CSEA) established a Spectrum Relocation Fund, funded from auction proceeds, to cover the costs incurred by federal entities that relocate to new frequency assignments or transition to alternative technologies. The auction of spectrum licenses in the 1710-1755 MHz band was the first with relocation costs to take place under CSEA. Twelve agencies previously operated communication systems in this band, including DOD. CSEA designated 1710-1755 MHz as "eligible frequencies" for which federal relocation costs could be paid from the Spectrum Relocation Fund. In September 2006, FCC concluded the auction of licenses in the 1710-1755 MHz band and, in accordance with CSEA, a portion of the auction proceeds is currently being used to pay spectrum relocation expenses. In response to the President's 2010 memorandum requiring that additional spectrum be made available for commercial use within 10 years, in January 2011, NTIA selected the 1755-1850 MHz band as the priority band for detailed evaluation and required federal agencies to evaluate the feasibility of relocating systems to alternative spectrum bands. DOD provided NTIA its input in September 2011, and NTIA subsequently issued its assessment of the viability for accommodating commercial wireless broadband in the band in March 2012. Most recently, the President's Council of Advisors on Science and Technology published a report in July 2012 recommending specific steps to ensure the successful implementation of the President's 2010 memorandum. The report found, for example, that clearing and vacating federal users from certain bands was not a sustainable basis for spectrum policy largely because of the high cost to relocate federal agencies and disruption to the federal missions. It recommended new policies to promote the sharing of federal spectrum. The sharing approach has been questioned by CTIA—The Wireless Association and its members, which argue that cleared spectrum and an exclusive-use approach to spectrum management has enabled the U.S. wireless industry to invest hundreds of billions of dollars to deploy mobile broadband networks resulting in economic benefits for consumers and businesses. Actual costs to relocate communications systems for 12 federal agencies from the 1710-1755 MHz band have exceeded original estimates by about $474 million, or 47 percent, as of March 2013. The original transfers from the Spectrum Relocation Fund to agency accounts, totaling over $1 billion, were made in March 2007. Subsequently, some agencies requested additional monies from the Spectrum Relocation Fund to cover relocation expenses. Agencies requesting the largest amounts of subsequent transfers include the Department of Justice ($294 million), the Department of Homeland Security ($192 million), the Department of Energy ($35 million), and the U.S. Postal Service ($6.6 million). OMB and NTIA officials expect the final relocation cost to be about $1.5 billion compared with the original estimate of about $1 billion. Total actual costs exceed estimated costs for many reasons, including unforeseen challenges, unique issues posed by specific equipment location, the transition timeframe, costs associated with achieving comparable capability, and the fact that some agencies may not have properly followed OMB and NTIA guidance to prepare the original cost estimate. NTIA reports that it expects agencies to complete the relocation effort between 2013 and 2017.
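As a rough arithmetic check on how these figures fit together, the sketch below recomputes the expected final cost and the percentage growth. The $1,010 million starting value is an assumption standing in for the "over $1 billion" in original transfers, chosen so the rounded results line up with the roughly $1.5 billion final cost and 47 percent growth cited above.

```python
# Amounts in millions of dollars; the original-transfer figure is an
# assumed approximation of the "over $1 billion" cited above.
original_transfers = 1_010
cost_growth = 474            # additional relocation costs as of March 2013

expected_final = original_transfers + cost_growth
growth_pct = cost_growth / original_transfers * 100

print(f"Expected final relocation cost: about ${expected_final:,} million")
print(f"Growth over the original estimate: about {growth_pct:.0f} percent")
```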
Although 11 of the 12 agencies plan to spend the same amount or more than they estimated, DOD expects to complete the 1710-1755 MHz transition for about $275 million, or approximately $80 million less than its cost estimate. DOD’s cost estimates, some made as early as 1995, changed over time as officials considered different relocation scenarios with differing key assumptions and their thinking evolved about the systems that would be affected, according to DOD and NTIA officials. Cost estimates to relocate military systems from the late 1990s and early 2000s ranged from a low of $38 million to as much as $1.6 billion, depending on the scenario. DOD’s final cost estimate to relocate from the band was about $355 million. DOD officials told us that the relocation of systems from the 1710-1755 MHz band has been less expensive than originally estimated because many of its systems were simply re-tuned to operate in the 1755-1850 MHz band. The auction of the 1710-1755 MHz band raised almost $6.9 billion in gross winning bids from the sale of licenses to use these frequencies. This revenue minus the expected final relocation costs of approximately $1.5 billion suggests that the auction of the band will raise roughly $5.4 billion for the U.S. Treasury. As mentioned above, NTIA reports that it expects agencies to complete the relocation effort between 2013 and 2017; therefore, the final net revenue amount may change. For example, the Department of the Navy has already initiated a process to return almost $65 million to the Spectrum Relocation Fund. DOD’s Office of Cost Assessment and Program Evaluation (CAPE) led the effort to prepare the department’s preliminary cost estimate portion of its study to determine the feasibility of relocating its 11 major radio systems from the 1755-1850 MHz band. To do so, CAPE worked closely with cost estimators and others at the respective military services regarding the technical and cost data needed to support the estimate and how they should be gathered to maintain consistency across the services. The services’ cost estimators compiled and reviewed the program data, identified the appropriate program content affected by each system’s relocation, developed cost estimates under the given constraints and assumptions, and internally reviewed the estimates consistent with their standard practices before providing them to CAPE. CAPE staff then reviewed the services’ estimates for accuracy and consistency, and obtained DOD management approval on its practices and findings. According to DOD officials, CAPE based this methodology on the cost estimation best practices it customarily employs. We reviewed DOD’s preliminary cost estimation methodology and evaluated it against GAO’s Cost Guide, which also identifies cost estimating best practices that help ensure cost estimates are comprehensive, well-documented, accurate, and credible. These characteristics of cost estimates help minimize the risk of cost overruns, missed deadlines, and unmet performance targets: A comprehensive cost estimate ensures that costs are neither omitted nor double counted. A well-documented estimate is thoroughly documented, including source data and significance, clearly detailed calculations and results, and explanations for choosing a particular method or reference. An accurate cost estimate is unbiased, not overly conservative or overly optimistic, and based on an assessment of most likely costs. 
A credible estimate discusses any limitations of the analysis from uncertainty or biases surrounding data or assumptions. DOD officials developed the preliminary cost estimate as a less-rigorous, “rough-order-of-magnitude” cost estimate as outlined by NTIA, not a budget-quality cost estimate. Because of this, we performed a high-level analysis, applying GAO’s identified best practices to DOD’s cost estimate and methodology, and did not review all supporting data and analysis. Overall, we found that DOD’s cost estimate was consistent with the purpose of the feasibility study, which was to inform the decision-making process to reallocate 500 MHz of spectrum for commercial wireless broadband use. Additionally, we found that DOD’s methodology substantially met the comprehensive and well-documented characteristics of reliable cost estimates, and partially met the accurate and credible characteristics. Comprehensive—Substantially Met: We observed that DOD’s estimate included complete information about systems’ life cycles, an appropriate level of detail to ensure cost elements were neither omitted nor double-counted, and overarching study assumptions that applied across programs. However, some programs did not list all the discrete tasks required for relocation, and not all the individual programs had evidence of cost-influencing ground rules and assumptions. Well-documented—Substantially Met: We found that management reviewed and accepted the estimate, the estimate was consistent with the technical baseline data, and documentation for the majority of programs was sufficient that an analyst unfamiliar with the program could understand and replicate what was done. However, the documentation also captured varying levels of detail on source data and its reliability, as well as on calculations performed and estimation methodology used, some of which were not sufficient to support a rough-order-of-magnitude estimate. Accurate—Partially Met: We found that DOD properly applied appropriate inflation rates and made no apparent calculation errors. In addition, the estimated costs agreed with DOD’s prior relocation cost estimate for this band conducted in 2001. However, no confidence level was specifically stated in DOD’s cost estimate to determine if the costs considered are the most likely costs, which is required to fully or substantially meet this characteristic. Credible—Partially Met: We observed that DOD cross-checked major cost elements and found them to be similar. However, some sensitivity analyses and risk assessments were only completed at the program level for some programs, and not at all at a summary level. Performing risk assessments and sensitivity analyses on all projects and at the summary level is required to fully meet this characteristic, and is required on a majority of projects and at the summary level to substantially meet this characteristic. Even though DOD’s preliminary cost estimate substantially met some of our best practices, as the assumptions supporting the estimate change over time, costs may also change. According to DOD officials, any change to key assumptions about the bands to which systems would move could substantially change relocation costs. Because decisions about the time frame for relocation and the spectrum bands to which the various systems would be reassigned have not been made yet, DOD based its current estimate on the most likely assumptions, provided by NTIA, some of which have already been proven inaccurate or are still undetermined. 
For example: Relocation bands: According to DOD officials, equipment relocation costs vary depending on the relocation band's proximity to the current band. Moving to bands further away than the assumed relocation bands could increase costs; moving to closer bands could decrease costs. In addition, congestion, in both the 1755-1850 MHz band and the potential bands to which its systems might be moved, complicates relocation planning. Also, DOD officials said that many of the potential spectrum bands to which DOD's systems could be relocated would not be able to accommodate the new systems unless other actions are also taken. For example, the 2025-2110 MHz band, into which DOD assumed it could move several systems and operate them on a primary basis, is currently allocated to commercial electronic news gathering systems and other commercial systems. To accommodate military systems within this band, FCC would need to withdraw this spectrum from commercial use to allow NTIA to provide DOD primary status within this band, or FCC would have to otherwise ensure that commercial systems operate on a non-interference basis with military systems. FCC has not initiated a rulemaking procedure to begin such processes. Relocation start date: DOD's cost estimate assumed relocation would begin in fiscal year 2013, but no auction has been approved, so relocation efforts have not begun. According to DOD officials, new equipment and systems continue to be deployed in and designed for the current band, and older systems are retired. This changes the overall profile of systems in the band, which can change the costs of relocation. For example, a major driver of the cost increase between DOD's 2001 and 2011 relocation estimates for the 1755-1850 MHz band was the large increase in the use of unmanned aerial systems. DOD deployed these systems very little in 2001, but their numbers had increased substantially by 2011. Conversely, equipment near the end of its life cycle when the study was completed may be retired or replaced outside of relocation efforts, which could decrease relocation costs. Inflation: Inflation will drive up costs as more time elapses before the auction occurs. In addition to changing assumptions, the high-level nature of a rough-order-of-magnitude estimate means that it is not as robust as a detailed, budget-quality lifecycle estimate, and its results should not be considered or used with the same confidence. DOD officials said that for a spectrum-band relocation effort, a detailed, budget-quality cost estimate would normally be done during the transition planning phase once a spectrum auction has been approved, and would be based on specific auction and relocation decisions. No official government revenue forecast has been prepared by CBO, FCC, NTIA, or OMB for a potential auction of the 1755-1850 MHz band licenses, but some estimates might be prepared once there is a greater likelihood of an auction. Officials at these agencies knowledgeable about estimating revenue from the auction of spectrum licenses said that it is too early to produce meaningful forecasts for a potential auction of the 1755-1850 MHz band. Moreover, CBO only provides written estimates of potential receipts when a congressional committee reports legislation invoking FCC auctions. OMB officials said NTIA, with OMB concurrence, will transmit federal agency relocation cost estimates to assist FCC in establishing minimum bids for an auction once it is announced.
OMB would also estimate receipts and relocation costs as part of the President’s budget. OMB analysts would use relocation cost information from NTIA to complete OMB’s estimate of receipts. Although no official government revenue forecast exists, an economist with the Brattle Group, an economic consulting firm, published a revenue forecast in 2011 for a potential auction of the 1755-1850 MHz band that forecasted revenues of $19.4 billion for the band. We did not evaluate the accuracy of this revenue estimate. Like all forecasts, the Brattle Group study was based on certain assumptions. The study assumed that the 1755-1850 MHz band would be generally cleared of federal users. It also assumed the AWS-1 average nationwide price of $1.03 per MHz-pop as a baseline price for spectrum allocated to wireless broadband services, and that the 1755-1780 MHz portion of the band would be paired with the 2155-2180 MHz band, which various industry stakeholders currently support. The study assumed that the 95 MHz of spectrum between 1755 and 1850 MHz would be auctioned as part of a total of 470 MHz of spectrum included in 6 auctions sequenced 18 months apart and spread over 9 years with total estimated net receipts of $64 billion. In addition, the study adjusted the price of spectrum based on the increase in the supply of spectrum over the course of the six auctions, as well as for differences in the quality of the spectrum bands involved. Like all goods, the price of licensed spectrum, and ultimately the auction revenue, is determined by supply and demand. This fundamental economic concept helps to explain how the price of licensed spectrum could change depending on how much licensed spectrum is available now and in the future, and how much licensed spectrum is demanded by the wireless industry for broadband applications. Government agencies can influence the supply of spectrum available for licensing, whereas expectations about profitability determine demand for spectrum in the marketplace. Supply. In 2010, the President directed NTIA to work with FCC to make 500 MHz of spectrum available for use by commercial broadband services within 10 years. This represents a significant increase in the supply of spectrum available for licensing in the marketplace. As with all economic goods, the price and value of licensed spectrum are expected to fall as additional supply is introduced, all other things being equal. Demand. The expected, potential profitability of a spectrum license influences the level of demand for it. Currently, the demand for licensed spectrum is increasing and a primary driver of this increased demand is the significant growth in commercial-wireless broadband services, including third and fourth generation technologies that are increasingly used for smart phones and tablet computers. Some of the factors that would influence the demand for licensed spectrum are: Clearing versus Sharing: Spectrum is more valuable, and companies will pay more to license it, if it is entirely cleared of incumbent federal users, giving them sole use of licensed spectrum; spectrum licenses are less valuable if access must be shared. Sharing could potentially have a big impact on the price of spectrum licenses. In 2012, the President’s Council of Advisors on Science and Technology advocated that sharing between federal and commercial users become the new norm for spectrum management, especially given the high cost and lengthy time it takes to relocate federal users. 
Certainty and Timing: Another factor that affects the value of licensed spectrum is the certainty about when it becomes available. Any increase in the probability that the spectrum would not be cleared on time would have a negative effect on the price companies are willing to pay to use it. For example, 7 years after the auction of the 1710-1755 MHz band, federal agencies are still relocating systems. The estimated 10-year timeframe to clear federal users from the 1755-1850 MHz band, and potential uncertainty around that timeframe, could negatively influence demand for the spectrum.

Available Wireless Services: Innovation in the wireless broadband market is expected to continue to drive demand for wireless services. For example, demand continues to increase for smartphones and tablets as new services are introduced in the marketplace. These devices can connect to the Internet through regular cellular service using commercial spectrum, or they can use publicly available (unlicensed) spectrum via wireless fidelity (Wi-Fi) networks to access the Internet. The value of the spectrum, therefore, is determined by continued strong development of and demand for wireless services and these devices, and the profits that can be realized from them.

Chairman Udall, Ranking Member Sessions, and Members of the Subcommittee, this concludes my prepared remarks. I am happy to respond to any questions that you or other Members of the Subcommittee may have at this time.

For questions about this statement, please contact Mark L. Goldstein, Director, Physical Infrastructure Issues, at (202) 512-2834 or goldsteinm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Mike Clements, Assistant Director; Stephen Brown; Jonathan Carver; Jennifer Echard; Emile Ettedgui; Colin Fallon; Bert Japikse; Elke Kolodinski; Joshua Ormond; Jay Tallon; and Elizabeth Wood.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Radio frequency spectrum is the resource that makes possible wireless communications. Balancing competing industry and government demands for a limited amount of spectrum is a challenging and complex task. In 2006, FCC completed an auction of spectrum licenses in the 1710-1755 MHz band that had previously been allocated for federal use. As part of an effort to make additional spectrum available for commercial use, DOD assessed the feasibility of relocating 11 major communication systems from the 1755-1850 MHz band. In September 2011, DOD found that it would cost about $13 billion over 10 years to relocate most operations from the 1755-1850 MHz band. GAO was asked to review the costs to relocate federal spectrum users and revenues from spectrum auctions. This testimony addresses our preliminary findings on (1) estimated and actual relocation costs and revenue from the previously auctioned 1710-1755 MHz band, (2) the extent to which DOD followed best practices to prepare its preliminary cost estimate for vacating the 1755-1850 MHz band, and (3) existing government or industry forecasts for revenue from an auction of the 1755-1850 MHz band. GAO reviewed relevant reports; interviewed DOD, FCC, NTIA, and Office of Management and Budget officials and industry stakeholders; and analyzed the extent to which DOD's preliminary cost estimate met best practices identified in GAO's Cost Estimating and Assessment Guide (Cost Guide). Actual costs to relocate federal users from the 1710-1755 megahertz (MHz) band have exceeded the original $1 billion estimate by about $474 million as of March 2013, although auction revenues appear to exceed relocation costs by over $5 billion. Actual relocation costs exceed estimated costs for various reasons, including unforeseen challenges and some agencies not following the National Telecommunications and Information Administration's (NTIA) guidance for preparing the cost estimate. In contrast, the Department of Defense (DOD) expects to complete relocation for about $275 million or approximately $80 million less than its $355 million estimate. According to DOD officials, the relocation of systems from this band has been less expensive than originally estimated because many systems were simply re-tuned to operate in the adjacent 1755-1850 MHz band. The auction of the 1710-1755 MHz band raised almost $6.9 billion in gross winning bids. NTIA expects agencies to complete the relocation effort between 2013 and 2017; therefore, final net auction revenue (auction revenue less relocation costs) may change. DOD's preliminary cost estimate for relocating systems from the 1755-1850 MHz band substantially or partially met GAO's best practices, but changes in key assumptions may affect future costs. Adherence with GAO's Cost Guide helps to minimize the risk of cost overruns, missed deadlines, and unmet performance targets. GAO found that DOD's estimate substantially met the comprehensive and well-documented best practices. For instance, it included complete information about systems' life cycles and documentation for the majority of systems was sufficient. However, not all programs had evidence of cost-influencing ground rules and assumptions, and some of the source data were insufficient. GAO also determined that DOD partially met the accurate and credible best practices. For example, DOD applied appropriate inflation rates and its estimated costs generally agreed with its 2001 cost estimate for this band. 
However, DOD did not develop a confidence level, making it difficult to determine if the costs considered are the most likely costs, and DOD only completed some sensitivity analyses and risk assessments at the program level for some programs. DOD officials said that changes to key assumptions could substantially change its costs. Most importantly, decisions about which spectrum band DOD would relocate to are still unresolved. Nevertheless, DOD's cost estimate was consistent with its purpose--informing the decision to make additional spectrum available for commercial wireless services. No government revenue forecast has been prepared for a potential auction of licenses in the 1755-1850 MHz band, and a variety of factors could influence auction revenues. One private sector study in 2011 forecasted $19.4 billion in auction revenue for licenses in this band, assuming that federal users would be cleared and the nationwide spectrum price from a previous auction, adjusted for inflation, would apply to this spectrum. The price of spectrum, and ultimately auction revenue, is determined by supply and demand. The Federal Communications Commission (FCC) and NTIA jointly influence the amount of spectrum allocated to federal and nonfederal users (the supply). The potential profitability of a spectrum license influences its demand. Several factors would influence profitability and demand, including whether the spectrum is cleared of federal users or must be shared.
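To make the MHz-pop pricing convention discussed above concrete, the following sketch works through the arithmetic in Python. It is illustrative only and is not the Brattle Group's model: the population figure and the inflation factor are assumed values, and the sketch omits the study's adjustments for the growing supply of spectrum and for band quality, which is why its unadjusted result sits well above the published $19.4 billion forecast. A second snippet applies the same back-of-the-envelope approach to the net-revenue arithmetic reported for the earlier 1710-1755 MHz auction.

```python
# Back-of-the-envelope spectrum valuation using the MHz-pop metric.
# Assumed values (not from the study): a covered U.S. population of roughly
# 310 million and a ~10 percent cumulative inflation adjustment since the
# 2006 AWS-1 auction.
aws1_price_per_mhz_pop = 1.03      # AWS-1 average nationwide price, $ per MHz-pop
inflation_factor = 1.10            # assumed cumulative inflation adjustment
population = 310_000_000           # assumed covered population
bandwidth_mhz = 95                 # 1755-1850 MHz band

gross_value = aws1_price_per_mhz_pop * inflation_factor * bandwidth_mhz * population
print(f"Unadjusted MHz-pop value: ${gross_value / 1e9:.1f} billion")
# Prints about $33 billion; the published $19.4 billion forecast is lower because
# it also discounts for the larger supply of spectrum across six assumed auctions
# and for differences in band quality.

# Net-revenue arithmetic for the earlier 1710-1755 MHz auction, using the rounded
# figures reported above (relocation is still under way, so the net may change).
gross_winning_bids = 6.9e9                   # gross winning bids
relocation_costs_so_far = 1.0e9 + 0.474e9    # original estimate plus overrun to date
net = gross_winning_bids - relocation_costs_so_far
print(f"Net auction revenue to date: ${net / 1e9:.1f} billion")  # a bit over $5 billion
```

The gap between the unadjusted MHz-pop figure and the published forecast is one illustration of why supply, sharing, and timing assumptions dominate any revenue projection for this band.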
Federal employees, by law, are entitled to receive fair and equitable treatment in employment without regard to their sex, among other things. In addition, any federal employee who has the authority to take, recommend, or approve any personnel action is prohibited from discriminating for or against any employees or applicants for employment on the basis of their sex. These rights are set forth in title VII of the Civil Rights Act of 1964, as amended, and the Civil Service Reform Act of 1978. In 1980, the Equal Employment Opportunity Commission (EEOC) issued regulations recognizing sexual harassment as an unlawful employment practice. Subsequent case law clarified that unlawful sexual harassment exists when unwelcome sexual advances, requests for sexual favors, or other verbal or physical conduct of a sexual nature are committed as a condition of employment or basis for employment action (“quid pro quo”), or when this conduct creates a hostile work environment. A key word is “unwelcome,” because unlawful sexual harassment may exist when the target perceives that he or she is being harassed, whether or not the perpetrator intended to create a hostile environment. EEOC has the authority to enforce federal sector antidiscrimination laws, issuing rules and regulations as it deems necessary to carry out its responsibilities. It issued revised guidelines for processing EEO complaints, including sexual harassment, that became effective in October 1992. NIH is one of several Public Health Service agencies within HHS and is the principal biomedical research agency of the federal government. It supports biomedical and behavioral research domestically and abroad, conducts research in its own laboratories and clinics, trains researchers, and promotes the acquisition and distribution of medical knowledge. NIH is made up of 26 ICDs, each of which has its own director and management staff. Its 13,000 employees are primarily located in the Bethesda, Maryland, area. Our objective was to obtain information on the extent and nature of sexual harassment and sex discrimination at NIH, to provide a systematic overview of an issue that had received media attention based on individual allegations. To accomplish this, we reviewed sexual harassment and sex discrimination complaints filed by NIH employees and conducted a projectable survey of NIH employees. We also interviewed agency officials at NIH, the Public Health Service, and HHS involved with handling such situations in order to familiarize ourselves with EEO-related activities. We obtained statistics on formal sexual harassment and sex discrimination complaints that were filed between October 1, 1990, and May 31, 1994, and reviewed those complaints filed during this period and subsequently closed. We also reviewed 20 complaints that were handled as part of NIH’s expedited sexual harassment process between September 1, 1992, and May 31, 1994. Under this accelerated procedure, officials from the involved ICD were required to immediately advise OEO officials about any sexual harassment allegations that came to their attention. OEO was then required to complete its inquiry within 2 weeks. NIH’s EEO complaint process is outlined in greater detail in appendix I. We did not compare the number and type of complaints filed by NIH employees with those filed by employees at other governmental institutions. 
To obtain an agencywide perspective on the sexual harassment and sex discrimination environment at NIH, we sent questionnaires to a stratified random sample of 4,110 persons who were NIH employees as of the end of fiscal year 1993. We asked these employees for their insights, opinions, and observations (anonymously) about sexual harassment and sex discrimination at NIH as well as their opinions about NIH's EEO system. The results of our survey, which can be projected to the universe from which it was selected, are shown in their entirety in appendix II. The overall usable response rate was 64.3 percent. The percentages presented in this report are based on the number of NIH employees who responded to the particular question being discussed. Because the survey results come from a sample of NIH employees, all results are subject to sampling errors. For example, the estimate that 32 percent of the employees have experienced sexual harassment is surrounded by a 95 percent confidence interval from 30 to 34 percent. All of the survey results in this report have 95 percent confidence intervals of less than ±5 percent unless otherwise noted. All reported comparisons of female and male responses are statistically significant unless otherwise noted. It should be noted that our questionnaire methodology, which is described in greater detail in appendix III, did not include comparing NIH with other governmental institutions.

We also contacted agency officials at NIH, the Public Health Service, and HHS to obtain estimated costs associated with processing sexual harassment and sex discrimination complaints. Information regarding the limited data that were available is covered in appendix IV. Our work was done at NIH's Bethesda, Maryland, location from May 1993 to May 1995, in accordance with generally accepted government auditing standards.

We requested comments from the Secretary, HHS; the Assistant Secretary for Health, HHS; and the Director, NIH on a draft of this report. Their consolidated comments are discussed on p. 16 and presented in appendix V.

Approximately 32 percent of NIH employees reported that they were the recipients of some type of uninvited, unwanted sexual attention in the past year, and employees filed 32 informal complaints and 20 formal complaints with NIH's OEO between October 1990 and May 1994. These complaints were filed primarily by female employees. Closed formal complaints we reviewed overwhelmingly identified immediate supervisors and/or management officials as the alleged harassers. However, employees in general did not consider these groups to be the only sources of sexual harassment at NIH. Coworkers and contractors were also identified as alleged harassers. Actions reportedly taken most often by sexually harassed employees to deal with their situations included ignoring the situation or doing nothing, avoiding the harasser, asking/telling the harasser to stop the offensive behavior, discussing the situation with a coworker and/or asking the coworker to help, or making a joke of the situation. Over 96 percent of NIH employees who said they were sexually harassed reported that they decided not to file complaints or take some other personnel action. Some of the more prevalent reasons employees gave for choosing not to file EEO complaints, grievances, or adverse action appeals were that (1) they did not consider the incident to be serious enough, (2) they wanted to deal with it themselves, and/or (3) they decided to ignore the incident.
Also, some of the employees who chose not to file complaints believed the situation would not be kept confidential, the harasser would not be punished, filing a complaint would not be worth the time or cost, and/or that they would be retaliated against. Although it remains small as a proportion of the workforce, the number of EEO complaints filed by NIH employees alleging sexual harassment has increased in recent years. Of the 20 formal complaints filed between October 1, 1990, and May 31, 1994, none were filed in fiscal year 1991; 4 and 7 were filed in fiscal years 1992 and 1993, respectively; and 9 were filed during the first 8 months of fiscal year 1994. Although 53 percent of employees reported they thought NIH did a somewhat good to very good job taking action against employees who engage in sexual harassment, about 27 percent of employees reported they thought NIH did a somewhat poor to very poor job. (See app. II, p. 31.) Our review of sexual harassment complaint files and statistics showed that no determinations or findings of sexual harassment had been made on formal EEO complaints filed by NIH employees that were closed between October 1991 and May 1994. It should be noted, however, that actions could be and have been taken against alleged harassers without a formal admission that harassment actually occurred. For the most part, employees reported they believed NIH was doing a good job of informing them about the nature of sexual harassment, the policies and procedures prohibiting it, and the penalties for those who engage in sexual harassment. NIH also got good reviews from its employees for encouraging them to contact ICD EEO officers and/or OEO regarding any sexual harassment concerns. Only 5.5 percent of employees viewed sexual harassment to be more of a problem at NIH than it was a year earlier, and 34.5 percent of the employees did not perceive sexual harassment to be a problem at all at NIH. However, many employees perceived NIH as doing a poor job of counseling victims of sexual harassment (20.8 percent), preventing reprisal/retaliation for reporting sexual harassment (22.2 percent), and taking action against those who harassed others (26.9 percent). With regard to their respective ICDs, 2.3 percent of the employees believed the problem had become more serious while 52.2 percent of employees did not consider sexual harassment to be a problem at their ICDs. (See table 1.) Two-thirds of the employees—67.1 percent—believed enough was being done by NIH to eliminate sexual harassment. This sentiment was echoed by 72.3 percent of employees about their respective ICDs and 74.7 percent of employees about their immediate supervisors. (See app. II, p. 23.) Women reported being harassed more often than men (37.7 percent compared to 23.8 percent), and women employees at NIH perceived sexual harassment to be a more serious problem than did men (21.3 percent compared to 8.2 percent). Male and female employees who said they experienced sexual harassment indicated that most of the uninvited, unwanted sexual attention consisted of gossip regarding people’s sexual behavior; sexual jokes, remarks, and teasing; and negative sexual remarks about a group (e.g., women, men, homosexuals). For the most part, employees reported that it was instigated by coworkers, supervisors, and/or contractors who worked on the NIH campus. 
Very few employees said that the sexual harassment they experienced included receiving or being shown nude or sexy pictures (4.8 percent); being pressured for a date (4 percent); receiving requests or being pressured for sexual favors (1.5 percent); receiving letters, phone calls, or other material of a sexual nature (1.4 percent); and threatened, attempted, or actual rape or sexual assault (0.4 percent). The employees who made these claims also said these situations had not occurred repeatedly—once or twice during the last year. (See app. II, p. 25.) Thirteen percent of NIH employees indicated to us that they believed they had experienced sex discrimination over the last 2 years. Of the 13 percent, approximately half chose to take some type of action regarding their situation. Many of these employees said they came forward and discussed their experiences with an EEO official, their immediate supervisor, and/or some other non-EEO official. However, about 10 percent of employees who alleged discrimination reported that they took the next step and filed an EEO complaint, grievance, or adverse action appeal with the appropriate NIH office. Some of the more prevalent reasons why employees chose not to file actions were concerns that they would not be treated fairly, that filing a complaint would not be worth the time or cost, that they would be retaliated against, that the situation was not serious enough, and/or that the situation would not be kept confidential. Many employees also decided to ignore the situation or to try to deal with their situations themselves. Between October 1990 and May 1994, 209 informal and 111 formal sex discrimination complaints were filed by female and male employees at NIH. Formal complaints that were closed during this time period were filed for multiple reasons, the most common being nonselection for promotion, lack of promotion opportunity, and objection to job evaluation ratings. The alleged discriminators were people with authority over the complainants and could therefore alter the conditions under which the complainants worked. Within NIH, more than half of the women employees (58.4 percent) said they believed the current sex discrimination situation to be as much of a problem as it was 1 year earlier, and 37 percent of the men said the same. Although the percentages were small, a larger percentage of men (7.2 percent) than women (6.1 percent) considered the problem to be at least somewhat worse. Also, 30.6 percent of male employees did not perceive sex discrimination to be a problem at NIH, a belief echoed by only 17.6 percent of female employees. (See fig. 1.) Men and women were divided, even within their own gender groups, in their belief as to whether NIH was doing enough to eliminate sex discrimination in the workplace. While the majority of men believed NIH was doing enough (71 percent), a number of men disagreed (17 percent). Women’s views were also divided—about 48 percent of the women expressed the view that NIH was doing enough to eliminate sex discrimination, but 33 percent disagreed. Many NIH employees reported they believed women and men were not given comparable opportunities and rewards at their ICDs. Approximately one out of five employees (20.2 percent) did not believe that women and men at NIH were paid the same for similar work or that men and women were formally recognized for similar performance at the same rate (19.7 percent). 
Nearly one out of three employees (30.1 percent) reported they did not believe men and women were promoted at the same rate when they had similar qualifications. A number of employees also reported they observed that women and men at NIH did not have similar opportunities for visibility (15.5 percent) or similar success finding mentors (22.8 percent), nor did they get equally desirable assignments (19.0 percent). About 44 percent of the employees reported they believed family responsibilities kept women at NIH from being considered for advancement more than they did for men and about 50 percent expressed the view that an “old boy network” prevented women at NIH from advancing in their careers. For each of these topics, female employees responded more strongly than their male counterparts, and the differences in their responses are statistically significant at the 95 percent confidence level. About 35 percent of employees reported they thought NIH did a somewhat poor to very poor job taking action against employees who engaged in sex discrimination. Our review of sex discrimination complaint files and statistics showed that no determinations or findings of sex discrimination had been made on formal EEO complaints filed by NIH employees that were closed between October 1991 and May 1994. It should be noted, however, that actions could be and have been taken against alleged discriminators without a formal admission that discrimination actually occurred. Although the management of NIH is highly decentralized, with each ICD largely responsible for its own management, the controversies that emerged in 1991 and 1992 over sex discrimination, sexual harassment, and racial discrimination were directed at the NIH Director, who was expected to address them on an agencywide basis. Partly in response to these controversies, NIH management has, in recent years, taken actions aimed at improving the agency’s EEO climate. Beginning with the fiscal year 1993 rating period, EEO became a critical element on managerial performance ratings and can have an impact on overall ratings and determinations of pay increases. NIH management also issued policy statements to employees and managers expressing its commitment to a discrimination-free environment. Several employee task forces were also established at NIH, such as the Task Force on Intramural Women Scientists and the Task Force on Fair Employment Practices. These groups, respectively, addressed issues such as differences in pay and status between male and female scientists with comparable backgrounds and experiences and improvements for processing reprisal complaints (the latter has been incorporated into NIH EEO policy). NIH officials recently conceded that pay discrepancies exist between male and female scientists, and they are acting to bring female scientists’ salaries in line with those of their male peers within their respective ICDs. An EEO hotline was operational from June 1993 through April 1994 to permit employees to call in and informally report EEO situations they were uncomfortable about. ICD officials were responsible for preparing reports about these inquiries. NIH management’s actions to better its EEO climate appear to have been positive ones. 
However, in light of the history of controversy surrounding EEO issues at NIH and the public focus of those issues on the office of the NIH Director, our review suggested additional steps that could be taken to further improve the environment and to provide information to the NIH Director to assist him in ensuring that the EEO climate continues to improve and problems are addressed as they emerge. NIH and HHS have been unsuccessful at meeting time frame requirements for processing sexual harassment and sex discrimination complaints filed by NIH employees. Federal regulations generally require that an agency provide the complainant with a completed investigative report within 180 days of accepting a formal complaint. Of the 119 formal sexual harassment and sex discrimination complaints filed between October 1, 1990, and March 31, 1994, 63 were still open as of April 30, 1995. All of these cases had been open for more than 1 year. Of the 56 cases that were closed by the end of April 1995, only 19 were closed within 180 days of the date the complaint was filed. Twenty-five of them were open for more than 1 year before being closed. (See fig. 2.) Responses to our questionnaire indicated that although about 32 percent of NIH employees said they experienced sexual harassment and approximately 13 percent said they believed they were discriminated against because of their sex, substantially fewer employees reported to NIH that they had experienced such situations. The limited reliability of complaint data in assessing the overall climate of an agency, along with the independent nature of the ICDs, makes it difficult for NIH management to assess the sexual harassment and sex discrimination environment. Agencywide information on how employees view these issues would aid management in making such an assessment; however, such information currently is not being collected. Through EEO training, attempts were made by NIH to educate employees about what actions or behaviors constitute sexual harassment and sex discrimination, how to prevent such situations, and what recourse employees have to deal with them. Many of the issues surrounding sexual harassment involve dealing with people, such as being sensitive to others in the workplace, being able to confront someone tactfully, treating people fairly, and maintaining a professional atmosphere. Some employees may actually be unaware that their actions are perceived by others as sexual harassment. Some employees may not realize that the actions of others are in fact sexual harassment and/or sex discrimination and that they do not have to tolerate these actions. Within NIH, the ICDs have been delegated the authority to develop and provide their own EEO training programs relating to preventing sexual harassment and sex discrimination. OEO has not monitored the quality, consistency, or frequency of the training provided to individual employees, nor has it provided agencywide criteria regarding the content of the courses provided or which employees should be required to attend. We contacted 10 of NIH’s 26 ICDs about their EEO training efforts. These ICDs employed over 9,200 people, or about 71 percent of NIH’s full-time permanent staff, and varied in size from 150 to over 2,000 employees. All 10 ICDs offered some form of sexual harassment prevention training. Six ICDs required all of their employees to receive such training, three ICDs required this training only for managers and supervisors, and one ICD had no attendance requirements. 
Most of the ICDs chose either to conduct their own training sessions or to have OEO conduct the training. In a few cases, the training was developed and/or presented by contractors. Five of the ICDs offered sexual harassment prevention training as recently as fiscal year 1994. However, one ICD last offered training in fiscal year 1991. The training sessions generally ranged from 2 to 4 hours. None of the ICDs reported offering training that specifically dealt with preventing sex discrimination. Any such training was to have been included with other training. As with the sexual harassment prevention training, the EEO training varied in length, recency (from fiscal year 1991 to fiscal year 1994), source of design, and target audience. Three of the 10 ICDs we contacted required their managers and supervisors to attend. Even though OEO did not provide standardized, scheduled training for NIH employees or maintain any data on the training provided to them by their respective ICDs, many employees considered themselves to be well informed about sexual harassment and sex discrimination. Most employees reported they believed that NIH did a somewhat good to very good job informing them about current policies and procedures prohibiting sexual harassment (85.9 percent) and behaviors or actions that constitute sexual harassment (80.0 percent). Similarly, a majority of employees also reported they believed that NIH did a somewhat good to very good job informing them about the penalties for those who engage in sexual harassment (63.1 percent). A large majority of employees reported they believed that NIH did a somewhat good to very good job informing them about current policies and procedures prohibiting sex discrimination (72.7 percent) and behaviors or actions that constitute sex discrimination (67.3 percent). However, about one out of four employees (24.9 percent) stated that NIH did a somewhat poor to very poor job of informing them about the penalties for those who engage in sex discrimination. Overall, 65.2 percent of NIH employees reported they believed NIH did a somewhat good to very good job informing them about their rights and responsibilities under federal government EEO regulations. They were less positive in their beliefs about how well NIH informed them about the roles of EEO officials, counselors, and investigators (51.9 percent good, 26.7 percent poor) and about the various complaint channels open to them (53.6 percent good, 26.2 percent poor). Employees also believed NIH did a somewhat better job of helping managers/supervisors develop an awareness of and skills in handling EEO problems (63.0 percent good, 20.9 percent poor) than it did for employees (53.2 percent good, 25.2 percent poor). At NIH, we found no agencywide record maintenance or tracking of problem areas or trends for situations handled at the ICD level. NIH management empowered the ICDs with responsibility for resolving situations in the hopes that their early resolution would prevent barriers from being created that would hinder productivity and/or cause employees to remain in hostile work environments for unnecessarily long periods of time. Regarding alleged sex discrimination, employees had the option of contacting the EEO officer in their respective ICDs to try to resolve their situations before filing a complaint with OEO. 
We found that ICD officials were not required to notify OEO officials of any recurring problems, behavioral patterns, or trends they identified when dealing with employees’ concerns about sex discrimination, thus depriving OEO officials and NIH employees of an overview of NIH’s EEO environment. While most NIH employees do not perceive sexual harassment and sex discrimination to be serious problems at NIH, and the number of those who believe progress has been made outweighs those who do not, a significant minority of NIH employees are still clearly concerned about the continuing existence of sexual harassment and sex discrimination at their agency. In order for NIH efforts against sexual harassment and sex discrimination to be successful, employees need to trust that the processes established for dealing with their concerns about sexual harassment and sex discrimination will produce results in a timely manner. To date, NIH and HHS have not met time frames established by federal regulations in handling many of the formal complaints filed by NIH employees. Because of the number of independent organizations operating under the NIH structure and the absence of reliable indicators on the extent to which sexual harassment and sex discrimination are occurring, we believe that looking at the agency “as a whole” could enable NIH to better determine the overall state of its sexual harassment and sex discrimination situations. Such an overall assessment would also provide agencywide information for the NIH Director to permit him to identify the existence of emerging EEO problems and to resolve them more expeditiously. For example, periodically using an NIH employee attitude questionnaire, such as the one we developed, would assist NIH in identifying problems that have occurred or acknowledging any progress that has been made in dealing with such situations. NIH has attempted to deal with employee concerns about sexual harassment and sex discrimination by increasing awareness about workplace relationships and improving agencywide communication through training. However, we noted that NIH lacks minimum standards with regard to course content and has not communicated its expectations on which employees should receive such training and on how frequently it should be provided. Moreover, NIH has not monitored training to ensure that its expectations regarding such training are being fulfilled. We recommend that the Secretary of HHS and the Director of NIH take steps to decrease the time it takes to process and resolve sexual harassment and sex discrimination complaints at NIH. In addition, because the Director is responsible for ensuring an appropriate EEO climate throughout NIH despite the decentralized management structure and practices of the agency, we also recommend that he take further steps to provide guidance for and monitoring of the agency’s EEO program. In doing so, we recommend he consider such steps as periodically conducting an employee attitude survey, such as the one we developed, so that the existence of sexual harassment and sex discrimination trends and problems can be more easily identified and dealt with; and establishing minimum standards for sexual harassment and sex discrimination-related training offered to NIH employees as well as procedures for monitoring the implementation of the training to ensure that employees participate as intended. We requested comments from the Secretary, HHS; the Assistant Secretary for Health, HHS; and the Director, NIH on a draft of this report. 
The Department responded with consolidated comments, which are presented in appendix V. The Department concurred with each of our recommendations and indicated that steps are under way to implement them. We believe that the steps outlined in the Department’s letter, if successfully implemented, will achieve the objective of our recommendations. As agreed with you, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will provide copies to the Secretary, Department of Health and Human Services; the Director, National Institutes of Health; and the Chairman and Ranking Minority Member of the Subcommittee on Civil Service, House Committee on Government Reform and Oversight. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix VI. If you have any questions about the report, please call me on (202) 512-8676. Federal regulations (29 C.F.R. Part 1614) state that agencies should provide prompt, fair, and impartial processing of EEO complaints, including those related to sexual harassment and sex discrimination. The federal EEO complaint filing process consists of two phases, informal and formal. Figure I.1 details the process and the time frames stated in the regulations. Once an employee has exhausted all options available through this process, he/she can appeal to the EEOC and/or through the court system. An NIH employee who believes he/she has been sexually harassed or discriminated against because of his/her sex can seek advice or assistance from various sources before filing an informal complaint. A supervisor or other management official can initially become involved to assist in resolving the situation at an early stage, or the employee can go directly to the EEO officer at the ICD where he/she works. If the situation cannot be resolved, or if the employee chooses not to have ICD officials address the situation, an informal complaint can be filed with NIH’s OEO. An employee who believes he/she has been sexually harassed or discriminated against because of his/her sex has 45 days from the alleged event to file an informal complaint with the OEO. An OEO-appointed counselor is allotted 30 days to attempt to resolve the matter by contacting employees associated with the situation. If the situation is not resolved within 30 days from the start of counseling (and the involved parties have not agreed to an extension), the complainant is to be given a counselor’s inquiry report and notified of the right to file a formal complaint within 15 days with HHS’s Office of Human Relations. HHS has responsibility for deciding whether to accept a complaint, hiring investigators, determining whether sexual harassment or sex discrimination has occurred, and arranging settlements. An accepted formal complaint is investigated by an independent contractor. The agency has 180 days to complete the investigation and provide the complainant with a report. If the complainant is not satisfied with the results of the investigative report, he/she is given appeal rights and has 30 days (from receipt) to request a hearing from the EEOC or an agency decision from HHS. Congress has requested that the U.S. General Accounting Office (GAO), an independent agency of Congress, review the extent and type of sexual harassment and sex discrimination that may be happening at the National Institutes of Health (NIH). 
To do this, we are surveying a randomly selected sample of NIH employees. This questionnaire asks about your experiences at NIH and your opinions about NIH's Equal Employment Opportunity (EEO) system, including the EEO complaint process. The responses of all NIH employees included in our sample are very important in order for us to accurately measure the occurrence of sexual harassment and sex discrimination at NIH. Because these are sensitive topics, the survey is anonymous. We cannot identify you from this questionnaire. If you have any questions, please call Ms. Jan Bogus at (202) 512-8557 or Ms. Annette Hartenstein at (202) 512-5724. With your help, we will be able to identify the problems that affect NIH employees and recommend solutions. The results will be presented in summary form. Any discussion of individual answers will not contain information that can identify you. Thank you for your help. To ensure your privacy, please return the postcard separately from the questionnaire. This will let us know that you completed your questionnaire.

This section asks about sexual harassment. Sexual harassment involves uninvited, unwanted sexual advances, requests for sexual favors, and other comments, physical contacts, or gestures of a sexual nature. Such actions may negatively affect one's career and may create an intimidating, hostile, or offensive environment.

1. As far as you are aware, is sexual harassment currently a problem at NIH and at your institute, center, or division? (Check one box in each row.) [Response matrix not reproduced: item a, at NIH (N=4,161); item b, at your institute, center, or division (N=1,477); response categories (1) through (6), including "No basis to judge."]

Note 1: All "Ns" (number in the population) are estimates based on appropriately weighting the sample results. Note 2: For questions in the matrix format, all percentages are based on those who chose a response other than "No basis to judge." Note 3: For questions in the matrix format, the "Ns" to the left of the first percentage represent the estimated size of the population who responded with a basis to judge. The "Ns" to the right of the last percentage represent the estimated size of the population who responded with "No basis to judge."

The objective of our questionnaire survey was to obtain information on the extent and type of sexual harassment and sex discrimination that may be happening at the National Institutes of Health (NIH). Using mail questionnaires, we asked about the general climate at NIH regarding sexual harassment and sex discrimination and specifically about the occurrence of behaviors at NIH that respondents considered to be instances of sexual harassment and about the occurrence of situations at NIH that respondents considered to be instances of sex discrimination. For those who indicated that they believed sexual harassment was directed toward them, we inquired about what the respondent did to deal with the situation. We asked a set of similar questions to see how individuals dealt with sex discrimination when it affected them. We also asked for respondents' views on NIH's equal employment opportunity (EEO) system and asked some general questions about the respondents' work setting and background. Due to the sensitive nature of the information we required, the questionnaire was anonymous. It did not contain any information that could identify an individual respondent. A postcard containing an identification number was included in the package sent to NIH employees. The postcard was to be mailed back to GAO separately from the questionnaire.
Receipt of the postcard allowed us to remove names from our mailing list. The questionnaire was first mailed in early January 1994. In late February, we sent out a follow-up mailing, which contained another questionnaire to those in our sample who did not respond to our first mailing. In mid-April, we sent a letter to those who still had not yet responded, urging them to take part in the survey. The questionnaire was designed by a social science survey specialist in conjunction with GAO evaluators who were knowledgeable about the subject matter. We pretested the questionnaire with 15 NIH employees from a number of occupational categories before mailing to help ensure that our questions were interpreted correctly and that the respondents were willing to provide the information required. After the questionnaires were received from survey respondents, they were edited and then sent to be keypunched. All data were double keyed and verified during data entry. The computer program used in the analysis also contained consistency checks.

Our study population represents the approximately 13,000 white-collar employees at NIH and excludes staff fellows and contract employees. Since NIH is composed of 26 institutes, centers, and divisions (ICD), we wanted the results of our survey to provide specific estimates for the 5 largest ICDs and a general estimate for the remaining 21 ICDs. In addition, we wanted to look specifically at the experiences of male and female employees in the five largest ICDs and in the other ICDs as a whole. We asked NIH to provide us with a computer file containing the names and home addresses of all NIH employees. From this list, we deleted staff fellows and "blue collar" employees. We used standard statistical techniques to select a stratified random sample from this universe of names. The sample contained 4,110 employees of the universe of 13,473 employees. Table III.1 presents the universe and sample sizes for each stratum.

Because this survey selected a portion of the universe for review, the results obtained are subject to some uncertainty or sampling error. The sampling error consists of two parts: confidence level and range. The confidence level indicates the degree of confidence that can be placed in the estimates derived from the sample. The range is the upper and lower limit between which the actual universe estimate may be found. For example, if all female employees of the Clinical Center had been surveyed, the chances are 19 out of 20 that the results obtained would not differ from our sample estimates by more than 5 percent.

Not all NIH employees who were sent questionnaires returned them. Of the 4,110 NIH employees who were sent questionnaires, 2,642 returned usable ones to us, an overall usable response rate of 64.3 percent. Table III.2 summarizes the questionnaire returns for the 4,110 questionnaires mailed. The usable response rates for the individual strata range from 49.5 to 77 percent. Table III.3 presents the response rates for each stratum.

Given our overall response rate of 64.3 percent, we wanted to get some indication that the 35.7 percent of our sample that did not respond to our survey were generally similar in their experiences regarding sexual harassment and sex discrimination to those who did respond to the survey. To find this out, in June 1994 we conducted a small-scale, nonstatistical telephone survey of 41 NIH employees who were in our sample but did not respond to the questionnaire.
We asked these individuals two questions that were included in the questionnaire. The first was the extent to which they believed sexual harassment was a problem at NIH as a whole and at their ICD. The second was a similar question regarding sex discrimination. Although these 41 employees perceived less sexual harassment and sex discrimination than did the 2,642 employees who responded earlier, the differences in their perceptions were not statistically significant. We decided not to modify the main survey results on the basis of the 41 telephone respondents' views because the telephone respondents did not form a statistically representative sample and the observed differences were not statistically significant.

The 2,642 usable returned questionnaires have been weighted to represent the study population of 13,473 white-collar employees at NIH (excluding staff fellows and contract employees). The weighted total population size for the sample was slightly different (13,460) due to rounding errors introduced in the sample weighting process. Because we sampled a portion of NIH employees, our survey results are estimates of all employees' views and are subject to sampling error. For example, the estimate that 32 percent of the employees have experienced sexual harassment is surrounded by a 95 percent confidence interval of ±2 percent. This confidence interval thus indicates that there is about a 95-percent chance that the actual percentage falls between 30 and 34 percent. All of the survey results in this report have 95 percent confidence intervals of less than ±5 percent unless otherwise noted.

In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information that are available to respondents, or in the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in the development of the questionnaire, the data collection, and data analysis for minimizing such nonsampling errors. These steps have been mentioned in various sections of this appendix.

There are many different levels at which an EEO situation can be handled before and during the actual EEO complaint process. Employees can involve supervisors and/or other management officials; institute, center, or division (ICD) EEO officers; and others in the pursuit of resolution before filing informal complaint paperwork with NIH's Office of Equal Opportunity (OEO). Department of Health and Human Services (HHS) officials estimated the cost of processing an informal complaint in NIH's OEO during fiscal year 1994 to be about $860. If the complaint is not resolved and the employee chooses to file a formal complaint with HHS, an additional $8,700 in costs could be borne by HHS' Office of Human Relations and NIH's OEO. This includes the cost of an investigation, which HHS contracts out to an investigative firm. The procedures for handling sexual harassment complaints differ from those established for handling other types of EEO complaints. In order to speed up the process, an investigation is contracted for when an informal complaint has been filed. This shifts the costs for the investigation from the formal to the informal stage. An HHS official said that under this process, total costs (informal and formal) can range from $10,225 to $11,825.
Our work did not include an analysis of the difference in cost between the two approaches. It should be noted that these cost estimates cannot be applied to all cases. Each case is unique—a complaint can be resolved at any step in the process or it may involve others outside of the normal EEO process. Also, none of these estimates include costs accrued at the ICD level, lost work time, settlement costs, complaints pursued through processes other than EEO (i.e., grievances), and costs that go beyond the formal complaint stage. NIH attorneys can become involved if the employee chooses NIH's alternative dispute resolution process before filing an informal complaint. However, the employee can later file an informal complaint if he/she is not satisfied with the outcome. NIH attorneys are also involved in EEO complaints that are appealed to the Equal Employment Opportunity Commission's (EEOC) Office of Federal Operations if the complainant is not satisfied with the outcome of the formal complaint stage. HHS attorneys and Justice Department officials defend NIH if the complainant decides to appeal the case beyond the EEOC to the court system.

Norman A. Stubenhofer, Assistant Director, Federal Management and Workforce Issues
Jan E. Bogus, Evaluator-in-Charge
Annette A. Hartenstein, Evaluator
Michael H. Little, Communications Analyst
James A. Bell, Assistant Director, Design, Methodology, and Technical Assistance Group
James M. Fields, Senior Social Science Analyst
Stuart M. Kaufman, Senior Social Science Analyst
Gregory H. Wilmoth, Senior Social Science Analyst
George H. Quinn, Jr., Computer Programmer Analyst
Pursuant to a congressional request, GAO examined the extent and nature of sexual harassment and sex discrimination at the National Institutes of Health (NIH). GAO found that: (1) 32 percent of NIH employees surveyed reported experiencing some form of sexual harassment in the past year, but 96 percent of these employees opted not to file an equal employment opportunity (EEO) complaint or take other personnel action; (2) NIH employees filed 32 informal and 20 formal sexual harassment complaints between October 1990 and May 1994; however, no determinations of sexual harassment were made in response to these complaints; (3) about 13 percent of NIH employees believed they had experienced sex discrimination over the last 2 years, but 90 percent of these employees chose not to file grievances or EEO complaints; (4) NIH employees filed 209 informal and 111 formal sex discrimination complaints between October 1990 and May 1994; however, no determinations of sex discrimination were made in response to the formal complaints; and (5) although NIH has recently acted to improve its EEO climate, more could be done in the areas of timeliness, information, and training.
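The sampling-error figures reported in the survey methodology above (a 95 percent confidence interval of about ±2 percentage points around the 32 percent estimate) can be approximated with the standard formula for a proportion. The sketch below is illustrative only: it treats the usable responses as a simple random sample and ignores the stratified design and weighting that were actually used, so it only roughly reproduces the reported interval.

```python
import math

# Approximate 95 percent confidence interval for the reported 32 percent estimate,
# treating the 2,642 usable responses as a simple random sample from the
# 13,473-employee study population. The actual design was stratified and weighted,
# so this is only a rough check of the reported interval of 30 to 34 percent.
p_hat = 0.32       # estimated proportion reporting sexual harassment
n = 2_642          # usable questionnaires returned
N = 13_473         # study population (white-collar employees)

fpc = math.sqrt((N - n) / (N - 1))              # finite population correction
se = math.sqrt(p_hat * (1 - p_hat) / n) * fpc   # standard error of the proportion
margin = 1.96 * se                              # 95 percent confidence level

print(f"Estimate: {p_hat:.0%}, margin of error: +/-{margin:.1%}")
# Prints a margin of about +/-1.6 percentage points under this simplified
# approximation; the design-based interval reported above is about +/-2 points.
```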
We obtained the budget authority and the staffing levels at the State IG office and the budget authority of the State Department for fiscal years 2001 through 2005 by analyzing OMB budget data for those years. Additional information on staff levels and resource distribution was obtained from the State IG to identify trends over this period. We identified audit, inspection, and investigative accomplishments reported by the State IG in semiannual reports to the Congress for fiscal years 2001 through 2005. We did not audit or otherwise verify the dollar amounts of the financial accomplishments reported by the State IG.

To review the IG's audit and inspection oversight coverage of the State Department, we compared the contents of the audits and inspections completed by the State IG in fiscal years 2004 and 2005 with the high-risk areas designated by GAO and with the management and performance challenges identified by the State IG. To review the investigative coverage, we used the investigative accomplishments reported by the State IG to show the level of investigative activity.

To obtain information about the quality control process used by the State IG, we obtained an understanding of the internal quality review process used by the IG. We also obtained reports of the most recent external quality peer reviews of the State IG's audit and investigative activities performed by other IG offices. Due to the lack of a peer review requirement for inspections, we compared the State IG's inspections with relevant standards related to independence, quality control, and evidence from the PCIE and ECIE Quality Standards for Inspections, 2005 revision, as well as the State IG's implementing policies and procedures for these standards. We also compared relevant inspection standards with Government Auditing Standards, and compared additional activities of the State IG related to independence with PCIE and ECIE Quality Standards for Offices of Inspector General, revised in October 2003. Specifically, we gained an understanding of the types of documentation and evidence supporting inspection recommendations through a judgmental sample of 10 inspection reports selected from a total of 112 inspection reports issued over fiscal years 2004 through 2005 that were not classified for national security purposes, and that did not include inspections of the Broadcasting Board of Governors. We did not test the reasonableness of the inspection recommendations or otherwise re-perform the inspections. Our sample covered different months, various team leaders, and different State Department locations. Due to the concerns of the State IG about the confidentiality of information sources used to complete the IG's inspections, we agreed to limit the types of documentation subject to our review. Officials for the IG stated that the documents not provided for our review were memorandums with information from confidential sources. We base our conclusions on the documents and information that we reviewed related to our sample of inspection reports. In those examples where inspection report recommendations lacked documented support, we verified that this was not due to any such limitation to our review.

To review the coordination of the State IG with DS, we obtained the annual reports issued by DS and additional information on cases of visa fraud that were investigated during fiscal years 2004 through 2005 from DS reports and a prior GAO report.
We also compared the coordination of investigations at the State Department with the practices of other IG offices at the U.S. Postal Service and the Internal Revenue Service. We obtained comments on a draft of this report from the State IG which are reprinted in their entirety in appendix III. A summary of the State IG’s written comments and our response are presented on page 29. We performed our audit from November 2005 through October 2006, in accordance with U.S. generally accepted government auditing standards. The inspection function within the State Department originated in 1906, when the Congress statutorily created a consular inspection corps of five officers to inspect the activities of the U.S. consulates at least once every 2 years. In 1924, the Congress established the Foreign Service to replace the Diplomatic and Consular Service, and required the officers of the newly created Foreign Service to inspect diplomatic and consular branches, as provided under the 1906 Act. The provisions of the 1906 and 1924 acts were repealed by the Foreign Service Act of 1946, which required the Secretary of State to assign Foreign Service officers to inspect the diplomatic and consular establishments of the United States at least once every 2 years. In 1957, the State Department established an Inspector General of Foreign Service, which carried out the inspections of diplomatic and consular offices for the State Department. In 1961, the Congress created a statutory Inspector General in the State Department with duties separate from that of the Inspector General of Foreign Service, which had been established by the State Department. The new inspector general had the statutory responsibility to conduct reviews, inspections, and audits of State Department economic and military assistance programs and the activities of the Peace Corps. Effective July 1, 1978, the statutory IG office created in 1961 was abolished by law and all of the duties of that office were statutorily transferred to the Inspector General of Foreign Service. The newly designated Inspector General of Foreign Service was tasked with carrying out the foreign assistance program review function and the inspections of diplomatic and consular offices that had previously been conducted by the two separate offices. In 1978, GAO reviewed the operations of the Inspector General of Foreign Service and determined that the IG’s inspection reports lacked substance because of the legal requirement for biennial inspections and the exceedingly broad scope and thin coverage of each inspection. GAO recommended that the Congress substitute the requirement for an inspection of each diplomatic and consular post at least every 2 years with a more flexible review schedule. GAO also questioned the independence of Foreign Service officers who were temporarily detailed to the IG’s office and recommended the elimination of this requirement provided by the Foreign Service Act of 1946. In 1980, the Congress again established a statutory IG, this time to act as a centralized unit within the State Department to include the functions of the previous IG of Foreign Service and to perform all audits, inspections, and investigations. Section 209 of the Foreign Service Act of 1980 established the Inspector General of the Department of State and Foreign Service and outlined the authority and functions of that position in specific terms. 
Section 209 patterned the State Department IG office after similar offices in other agencies under the IG Act, but added functions from the Foreign Service Act of 1946 specific to the State Department. With regard to inspections, the Congress directed the IG to "periodically (at least every 5 years) inspect and audit the administration of activities and operations of each Foreign Service post and each bureau and other operating units of the Department of State." In 1982, we reviewed the operations of the Inspector General of the Department of State and Foreign Service. In that report, we noted the differences between the Foreign Service Act and the IG Act and found that the 5-year inspection cycle required by the Foreign Service Act hampered the IG's effectiveness by limiting its ability to do other work. In addition, our report expressed our continuing concerns about independence. These concerns were due, in part, to the IG's continued use of temporarily assigned Foreign Service officers and other persons from operational units within the department to staff the IG office. Our report also noted that the IG had not established a quality review system to help ensure that the work of the office complied with professional standards, and that the IG used staff from the State Department's Office of Security, a unit of management, to conduct investigations of fraud, waste, and abuse. We recommended that the Secretary of State work with the IG to establish a permanent IG staff and discontinue the office's reliance on temporary staff who rotate back to assignments in the Foreign Service or management positions. We also recommended that the Secretary and the IG establish an investigative capability within the IG office to enable it to conduct its own investigations, and to transfer qualified investigators from the Office of Security to the IG for this work. Reacting to concerns similar to those expressed in our 1982 report, the Congress established an IG for the Department of State through amendments to the IG Act in both 1985 and 1986. These amendments designated the State Department as an agency requiring an IG under the IG Act and abolished the previous Office of Inspector General of State and Foreign Service, which was created under section 209 of the Foreign Service Act of 1980. The 1986 Act authorized the State IG to perform all duties and responsibilities, and to exercise the authorities, stated in section 209 of the Foreign Service Act and in the IG Act. The 1986 Act also prohibited a career member of the Foreign Service from being appointed as the State IG. Since 1996, the Congress, through Department of State appropriations acts, has annually waived the language in section 209(a) of the Foreign Service Act that calls for every post to be inspected every 5 years. The State IG continues to inspect the department's approximately 260 posts and bureaus and international broadcasting installations throughout the world, applying a risk-based approach. To illustrate, the State IG completed inspections at 223 bureaus and posts over the 5-year period of fiscal years 2001 through 2005. These inspections encompass a wide range of objectives, which include reviewing whether department policy goals are being achieved and whether the interests of the United States are being represented and advanced effectively.
In addition, the State IG performs specialized security inspections and audits in support of the department's mission to provide effective protection to its personnel, facilities, and sensitive intelligence information. Therefore, although the annual waiver means there is no requirement that inspections be performed, the State IG continues to conduct inspections as part of its plan for oversight of the department, using a risk-based approach rather than the 5-year cycle to identify locations for inspection. Inspections are defined by the PCIE and ECIE as a process that evaluates, reviews, studies, and analyzes the programs and activities of an agency for the purposes of providing information to managers for decision making; making recommendations for improvements to programs, policies, or procedures; and identifying where administrative action may be necessary. Inspections may be used to provide factual and analytical information; monitor compliance; measure performance; assess the efficiency and effectiveness of programs and operations; share best practices; and inquire into allegations of fraud, waste, abuse, and mismanagement. The IG Act requires the IGs to recommend policies and to conduct, supervise, or coordinate other activities, in addition to audits and investigations, carried out by the department for the purpose of promoting economy and efficiency and preventing fraud and abuse in its programs and operations. These requirements of the IG Act are broad enough to cover inspections, which are widely used by the IG community. According to the IG community, inspections provide the benefits of a flexible mechanism for optimizing resources, expanding agency coverage, and using alternative review methods and techniques. In fiscal year 2005, across the federal government, the statutory IGs issued a total of 443 inspection reports compared to a total of 4,354 audit reports, a ratio of inspections to audits of about 1 to 10. As a comparison, the State IG issued 99 inspection reports and 44 audit reports during fiscal year 2005, or a ratio of inspections to audits of over 2 to 1. The State IG currently provides oversight of the Department of State, the Broadcasting Board of Governors, and the foreign affairs community through audits, inspections, and investigations. This work is led by the State Department Inspector General; the Deputy Inspector General; the Assistant Inspectors General for Audits, for Inspections, for Management, Policy, and Planning, and for Investigations; and a Director for Information Technology. In addition, the State IG has four advisory and support offices, which are the Office of Counsel, Congressional and Public Affairs, Senior Advisor for Security and Intelligence, and Coordinator for Iraq and Afghanistan. (See fig. 1.) From fiscal year 2001 through 2005, the State IG's overall budget authority increased from $29 million to $32 million, which, when expressed in constant dollars, is an increase of approximately 1 percent. (See fig. 2.) Over the same period of time, the State Department's overall budget authority increased from $13.7 billion in fiscal year 2001 to $22.4 billion in fiscal year 2005, an increase of approximately 50 percent in constant dollars. When compared with other federal IG budgets, the State IG's ranking in terms of percentage of total agency budgetary resources decreased from eighth (0.21 percent of total agency budgetary resources) to twelfth (0.14 percent of total agency budgetary resources) between fiscal years 2001 and 2005. (See apps. I and II.)
The department's budgetary increases reflect, in part, initiatives in transformational diplomacy, particularly in Iraq and Afghanistan, and substantial increases in programs for counternarcotics, counterterrorism, embassy construction and security, and IT. During the same time period, the State IG's authorized FTE staff increased from 289 in fiscal year 2001 to 314 in fiscal year 2005; however, during 2005, the IG limited the actual onboard staffing to 191 of the 314 authorized FTEs due to budgetary constraints. This represents a 16 percent reduction in onboard staff compared to the onboard staffing level of 227 in fiscal year 2001. The State IG has reported that its limited resources are further strained by the significant growth in the number of department programs and grants with mandated IG oversight and by requests for joint activities with other departments, agencies, and IG offices. For fiscal year 2005, the State IG Office distributed its 191 onboard staff as follows: 38 percent of the staff performing inspections in the Office of Inspections and in the Office of Information Technology, 28 percent in the Office of Audits, 9 percent in the Office of Investigations, and the remaining 25 percent in support positions to address administrative, personnel, legal, and other specialized issues. (See fig. 3.) This distribution shows the significant emphasis that the State IG places on inspections in relation to either audits or investigations. Statutory IGs, including the State IG, are required by the IG Act to summarize the activities and accomplishments of their offices and include this information in semiannual reports provided to the Congress. The information includes the number of audit reports issued and the dollar amount of questioned costs, unsupported costs, and funds to be put to better use. As defined by the IG Act, questioned costs include alleged violations of laws, regulations, contracts, grants, or agreements; costs not supported by adequate documentation; or the expenditure of funds for an intended purpose that was unnecessary or unreasonable. In addition, unsupported costs are defined as costs that do not have adequate documentation, and funds to be put to better use are defined as inefficiencies in the use of agency funds identified by the IG. As an illustration of funds to be put to better use, the State IG identified weaknesses in the department's purchase card program that resulted in untimely purchase card payments that precluded the department from earning rebates from the purchase card provider. During fiscal years 2001 through 2005, the State IG reported that it issued a total of 210 audit reports with total financial accomplishments of approximately $75 million. This included $37.1 million in questioned costs, of which $17.9 million were unsupported costs, and $38 million in funds to be put to better use. The investigative activity reported over the same 5-year period included 252 cases closed and financial accomplishments of $29.4 million in judicial recoveries, $17.6 million in court-ordered fines, and $11.5 million in court-ordered restitution. In addition, the State IG reported that its investigations resulted in 92 prosecutorial referrals, 53 indictments, 52 convictions, and 42 criminal sentences. Over the same 5-year period, the State IG reported that it had issued 461 inspection reports.
The State IG's semiannual reports include summarized results of its inspection activity even though this information is not specifically required by the IG Act. The results vary from identification of weaknesses in operations to recommendations for proper implementation of State Department policies. There were no significant monetary results reported from the State IG's inspections. The State IG provides oversight coverage of the department primarily through a combination of audits and inspections, with, as shown earlier, a heavier emphasis on inspections. Although the Congress annually waives the requirement to conduct inspections under section 209(a) of the Foreign Service Act, State IG officials told us that State Department management encourages the IG inspections and has found the results very significant and useful. Therefore, the IG continues to plan for inspections on a cyclical basis using a risk-based approach. As a result, over the 5-year period of fiscal years 2001 through 2005, the IG completed inspections at 223 of the 260 department bureaus and posts. We also analyzed the State IG's coverage of the areas designated as high risk by GAO and the significant management challenges identified by the State IG. Since 1990, we have periodically reported on government operations, including those of the State Department, that we have designated as high risk because of their greater vulnerability to fraud, waste, abuse, and mismanagement. In addition, the IGs began identifying management challenges in 1997 at the request of members of Congress, who asked the IGs to identify the most serious management problems in their respective agencies. This began a yearly process that continues as a result of the Reports Consolidation Act of 2000. The act requires executive agencies, including the State Department, to include their IGs' lists of significant management challenges in their annual performance and accountability reports to the President, OMB, and the Congress. In our most recent reports of government high-risk areas issued in January 2003 and January 2005, we identified seven such areas at the State Department. These high-risk areas were also included among the management challenges identified by the State IG. (See table 1.) Each year, the State IG's Office of Inspections includes the management challenges identified by the IG as areas of emphasis in inspections of the department's bureaus or missions. Some areas of emphasis may be applicable only to embassies and other missions, while other areas of emphasis may be applicable only to domestic entities such as bureaus, offices, and other units. In our review of the issues addressed by the State IG's audit and inspection reports for fiscal years 2004 and 2005, we determined that the State IG had provided oversight of all identified high-risk areas and management challenges largely through inspections. The State IG inspectors use a questionnaire during each inspection to compile the information regarding the areas of emphasis, including the management challenges identified by the IG. Each questionnaire can cover numerous areas of emphasis, including several management challenges. Therefore, while the State IG issued a total of 203 inspection reports over fiscal years 2004 through 2005, these inspections addressed 605 management challenges in the various posts, bureaus, and offices reviewed.
In addition, the State IG relies almost exclusively on the results of inspections, as compared to audits, to cover the four high-risk areas and management challenges related to human resources, counterterrorism, public diplomacy, and information security. (See table 1.) To illustrate, for fiscal years 2004 and 2005 combined, the State IG covered human resource issues with 1 audit and 103 inspections, counterterrorism and border security with 2 audits and 190 inspections, public diplomacy with 2 audits and 103 inspections, and information security with 1 audit and 13 inspections. In contrast, over the same 2-year period, the State IG issued 88 audit reports, each of which addressed a single management challenge. For example, for the high-risk area and management challenge of physical security, the State IG provides coverage mostly through inspections but also includes audits that address specific contracts and procurements for equipment and services. Also, while the State IG's inspections obtain financial information at the department's bureaus and posts, the high-risk area and management challenge of financial management is covered almost exclusively by the State IG's financial audits. Due to the significance of the high-risk areas covered largely by inspections, the State IG would benefit from reassessing the mix of audit and inspection coverage for those areas. There are fundamental differences between inspections and audits. The PCIE and ECIE developed Quality Standards for Inspections in 1993, and revised them in 2005, to provide a framework for performing inspections. There are similarities between these inspection standards and the Government Auditing Standards required by the IG Act for audits, but there are fundamental differences as well. Both standards require that (1) staff be independent, (2) evidence for reported results be documented, and (3) the elements of a finding (criteria, condition, cause, and effect) be included with the reported results. A fundamental difference between audits and inspections lies in the requirements for sufficient, appropriate evidence to support findings and conclusions and in the levels of documentation needed to support findings, conclusions, and recommendations. Audits performed under Government Auditing Standards are, by design, subject to more in-depth requirements for evidence and supporting documentation than inspections performed under the inspection standards. In addition, while auditing standards require external quality reviews of auditing practices, or peer reviews, on a 3-year cycle by reviewers independent of the State IG's office, neither the inspection standards nor the State IG's policies and procedures require such external reviews of inspections. We reviewed the documentation for 10 inspections to gain an understanding of the extent of documented evidence in the inspectors' working papers to support each report's recommendations. The 10 inspections were selected from a total of 112 inspections completed over fiscal years 2004 and 2005 that were not classified for national security purposes and to which we had access. The reports for the 10 inspections included a total of 183 recommendations.
We found that the inspectors relied heavily on questionnaires completed by the staff at each bureau or post that was inspected, official State Department documents, correspondence and electronic mail, internal department memos, including those from the Secretary, interview memorandums, and the inspectors' review summaries. We did not find additional testing of evidence or sampling of agency responses to test for the relevance, validity, and reliability of the evidence, as would be required under auditing standards. We also found that for 43 of the 183 recommendations contained in the 10 inspections we reviewed, the related inspection files did not contain documented support beyond written summaries of the findings and recommendations. While the State IG's inspection policies for implementing the PCIE and ECIE inspection standards require that supporting documentation be attached to the written summaries, the summaries indicated that there was no additional supporting documentation. The State IG has quality assurance processes that cover its three main lines of work: (1) audits, (2) investigations, and (3) inspections. Independence is a key element that should permeate all of the IG's major lines of work. For audits, Government Auditing Standards require an appropriate internal quality control system and an external peer review of audit quality every 3 years. These standards specify that quality control systems should include procedures for monitoring, on an ongoing basis, whether the policies and procedures related to the standards are suitably designed and are being effectively applied. For investigations, the Homeland Security Act of 2002 amended the IG Act to require that each IG office with investigative or law enforcement authority under the act have its investigative function reviewed periodically by another IG office and that the results be communicated in writing to the IG and to the Attorney General. For inspections, PCIE and ECIE inspection standards provide guidance for quality control and include a requirement for ongoing internal quality inspection, but they do not contain a requirement for an external quality review, or peer review. Following is a summary of recent quality reviews of the State IG's audit, inspection, and investigative work: Audits. Peer reviews provide an independent opinion on the quality control system related to audits. The State IG has obtained two external peer reviews of its audit practice from other IG offices since the beginning of fiscal year 2001 and obtained an unqualified, or "clean," opinion in each review. Both peer reviews concluded that the State IG's quality control system for the audit function had been designed in accordance with professional auditing standards. In addition, the most recent peer review, completed by the Department of the Interior IG in 2004, provided useful suggestions for improvement. The most significant suggestion was for the State IG to establish ongoing internal quality reviews of the audit function, as required by professional auditing standards. While the State IG did conduct internal quality reviews for its audit practice that were completed in May 2001 and March 2003, the Interior IG found that the reviews were not the result of an ongoing process.
To address the peer review's suggestion, the State IG established the Policy, Planning, and Quality Assurance Division under the Assistant IG for Audits in November 2005 to conduct internal reviews and provide summary reports on a semiannual basis, which we view as a very positive action to help ensure ongoing audit quality. Investigations. The State IG obtained the results of the first external quality review of its investigations from the Tennessee Valley Authority (TVA) IG on November 16, 2005. The TVA IG used the PCIE Quality Standards for Investigations, the Quality Assessment Review guidelines established by the PCIE, and the Attorney General Guidelines for Offices of Inspector General with Statutory Law Enforcement Authority to review the quality of the State IG's investigations. The TVA IG concluded that the State IG's system of internal safeguards and management procedures for the investigative function was in full compliance with quality standards established by the PCIE and the Attorney General's guidelines and provided reasonable assurance of conformance with professional standards. The reviewers also suggested improvements for the State IG, and these are being addressed by the Assistant IG for Investigations. Inspections. An external quality review, or peer review, of the State IG's inspections is not required under the inspection standards. During our review, the State IG implemented a plan for conducting an internal quality review of inspections as called for by the PCIE and ECIE inspection standards. The first such review was being conducted at the time of our audit, but the report on inspection quality had not yet been completed. This review includes a sample of completed inspections to determine whether they meet the PCIE and ECIE quality inspection standards. However, the State IG's quality review does not include inspections by the Office of Information Technology, and at the time of our review there was no internal quality review process for IT inspections. Because the inspection work of the IG's IT office is used, at least in part, by the department to ensure its compliance with the requirements of the Federal Information Security Management Act of 2002 (FISMA) for effective information security controls, the quality of the IT inspections is critical to the department for providing overall assurance of FISMA compliance. Inspection quality is also critical because of the State IG's almost exclusive reliance on inspections to cover the information security area, which has been identified by GAO as high-risk and by the State IG as a management challenge for the department. Independence. Independence is an overarching element that is critical to quality and credibility across all of the work of the State IG and is fundamental to Government Auditing Standards and the IG Act. Quality Standards for Federal Offices of Inspector General, updated by the PCIE and ECIE in October 2003, also addresses independence in its quality standards for the management, operations, and conduct of federal IG offices. Both sets of standards recognize that IG independence is a critical element of the IG's obligation to be objective, impartial, intellectually honest, and free of conflicts of interest. Also, consistent with Government Auditing Standards, the quality standards for IG offices state that, without independence both in fact and in appearance, objectivity is impaired.
In addition, the PCIE and ECIE Quality Standards for Inspections require that the inspection organization and each individual inspector be free, both in fact and in appearance, from impairments to independence. Two areas of continuing concern regarding independence are (1) the temporary appointment of management personnel, with various titles such as Deputy IG, Acting IG, or Acting Deputy IG, to head the State IG office and (2) the use of Foreign Service staff to lead State IG inspections. For example, between the last two presidentially appointed IGs (a period of over 2 years, from January 24, 2003, until May 2, 2005), all four of those heading the State IG office in an acting IG capacity were selected from State Department management staff and temporarily employed in the State IG office. These individuals had served in the Foreign Service in prior management positions, including as U.S. ambassadors to foreign countries. In addition, three of these individuals returned to significant management positions within the State Department after heading the State IG office. Table 2 shows prior and subsequent positions held by those heading the State IG office for a recent 27-month period until the current IG was confirmed on May 2, 2005. This use of temporarily assigned State Department management staff to head the State IG office can affect the perceived independence of the entire office in its reviews of department operations, and the practice is not consistent with (1) independence requirements of Government Auditing Standards, (2) other professional standards followed by the IGs, and (3) the purpose of the IG Act. Career members of the Foreign Service are prohibited by statute from being appointed as State IG. This exclusion of career Foreign Service staff from consideration when appointing the State IG avoids the personal impairments to independence that could result when reviewing the bureaus and posts of fellow Foreign Service officers and diplomats. The same concern with independence arises when career Foreign Service officers and diplomats temporarily head the State IG office in an acting IG capacity. In addition to the potential independence impairment posed by acting IGs, the State IG can impair its independence through its reliance on Foreign Service staff, temporarily employed by the IG office, to lead inspections. As a condition of their employment, Foreign Service staff are expected to help formulate, implement, and defend government policy, which severely limits the appearance of objectivity when reviewing department activities that may require them to question official policies. The State IG's inspection policy is for Foreign Service staff with the rank of ambassador, or other staff who serve at the ambassador level, to lead inspections. Long-serving State IG officials told us that the program knowledge of these State Department officials is important when reviewing the department's bureaus and posts. Of the 112 inspections completed during fiscal years 2004 and 2005 to which we had access, 79 had team leaders who held the rank of ambassador or served at the ambassador level. Foreign Service staff on these inspections often move through the State IG office on rotational assignments to serve again in Foreign Service positions for the department after working for the State IG.
For example, 9 of the 22 Foreign Service officials who were assigned to these 112 inspections as either staff or team leaders had transferred or returned to management offices in the State Department by December 2005. The State IG's use of career Foreign Service staff and others at the ambassador level to lead inspections presents a potential impairment to independence. In both our 1978 and 1982 reports, we expressed concerns about the independence of inspection staff reassigned to and from management offices within the department. In these prior reports, we stated that the desire of State IG staff to receive favorable assignments after their State IG tours could influence their objectivity. While these staff may offer valuable insights from their experience in the department, we believe there is considerable risk that independence could be impaired, with a detrimental effect on the quality of State IG inspections and the effectiveness of the State Department. It is therefore important that officials not sacrifice independence, in fact or appearance, for other factors in the staffing and leadership of the IG's office. As an alternative, such staff could provide the benefits of their experience and expertise as team members rather than team leaders without impairing the inspection team's independence. The IG Act established the State IG to conduct and supervise independent audits and investigations that prevent and detect fraud, waste, abuse, and mismanagement in the State Department. The Bureau of Diplomatic Security (DS), as part of its worldwide responsibilities for law enforcement and security operations, also performs investigations of passport and visa fraud both externally and within the department. Currently, there is no functional written agreement or other formal mechanism in place between DS and the State IG to coordinate their investigative activities. DS assigns special agents to U.S. diplomatic missions overseas and to field offices throughout the United States. The special agents conduct passport and visa fraud investigations and are responsible for security at 285 diplomatic facilities around the world. This effort currently entails a global force of approximately 32,000 special agents, security specialists, and other professionals who make up the security and law enforcement arm of the State Department. In fiscal year 2004, DS reported that it opened 5,275 new criminal investigations and made 538 arrests for passport fraud, 123 for visa fraud, and 54 for other offenses. For fiscal year 2005, DS reported a combined 1,150 arrests for passport and visa fraud. Both the State IG and DS pursue allegations of passport and visa fraud by State Department employees. State IG officials stated that they were aware of DS investigations in these areas that were not coordinated with the State IG. Without a formal agreement to outline the responsibilities of both DS and the State IG regarding these investigations, there is inadequate assurance that this work will be coordinated to avoid duplication or that independent investigations of department personnel will be performed. Also, because DS reports to the State Department's Undersecretary for Management, DS investigations of department employees, especially when management officials are the subjects of the allegations, can result in management investigating itself.
In other agencies where significant law enforcement functions like those at DS exist alongside their inspectors general, the division of investigative functions between the agency and the IG is established through written agreements. For example, the U.S. Postal Service has the Postal IG, established by the IG Act, and the Chief Postal Inspector, who heads the U.S. Postal Inspection Service, with jurisdiction in criminal matters affecting the integrity and security of the mail. Postal inspectors investigate postal crimes and provide security for the protection of postal employees at 37,000 postal facilities throughout the country. In 2006, the Chairman of the Board of Governors and the Postmaster General signed a memorandum announcing the completion of the transfer of investigative jurisdiction for postal employees from the Postal Inspection Service to the Office of Inspector General. The Postal IG was recognized as having full responsibility for the investigation of internal crimes, whereas the Postal Inspection Service remains responsible for security and the investigation of external crimes. This agreement also included a shift of resources between the two organizations to cover their responsibilities. In another example, the Internal Revenue Service Criminal Investigation (IRS CI) and the Treasury Inspector General for Tax Administration (TIGTA) have signed a memorandum of understanding that recognizes IRS CI's responsibility to investigate criminal violations of the tax code, while TIGTA has the responsibility to protect the IRS against attempts to corrupt or threaten IRS employees and to investigate violations by IRS employees. This agreement includes the coordination of investigative activities between these offices and recognizes TIGTA as the final authority to investigate IRS Criminal Investigation employees. Agreements such as those crafted by the U.S. Postal Service and the IRS can serve as models for a formal agreement between DS and the State IG. The State IG relies heavily on inspections instead of audits for oversight of high-risk areas and management challenges. Areas such as human resources, counterterrorism, public diplomacy, and information security are covered almost exclusively through inspections. By design, inspections are conducted under less in-depth requirements than audits performed under Government Auditing Standards in terms of the evidence and documentation needed to support findings and recommendations. Federal IGs use inspections as an important oversight tool along with audits and investigations, which are specifically required by the IG Act. However, the State IG conducts a much higher ratio of inspections to audits than the federal statutory IG community as a whole. Due to the risk and significance of the high-risk areas being covered largely by inspections, the State IG would benefit from reassessing risk and its heavy reliance on inspections in those areas to determine whether the current mix of audits and inspections provides the amount and type of oversight coverage needed. Given the important role that inspections currently play in the State IG's oversight of the department, assurance of inspection quality is also important. The State IG is currently conducting its first internal review of inspections, but the results have not yet been reported and the review does not include IT inspections.
Independence is critical to the quality and credibility of an IG's work under the IG Act and is fundamental to Government Auditing Standards and professional standards issued by the PCIE and ECIE. Based on our current concerns and those from our past reports, we believe that the State IG would benefit from additional policies and revised structures in order to avoid situations that raise concerns about independence, such as the appointment of State Department management officials to head the State IG office in an acting IG capacity and the use of career Foreign Service staff and others who transfer from or return to department management offices to lead IG inspections. Such policies and structures would be geared toward (1) providing for independent acting IG coverage in the event of delays between IG appointments and (2) assuring that State IG inspections are not led by career Foreign Service or other staff who move to assignments within State Department management. With regard to ambassadors, Foreign Service officers, and other rotational staff leading inspections, approaches could range from the State IG limiting its inspection activities to a level that is supportable without reliance on staff who routinely rotate to management offices, to permanently transferring or hiring additional staff, or FTEs, along with associated resources for the State IG office to eliminate the need to rely on Foreign Service and other rotational staff to conduct inspections. In addition, the State IG's inspection teams could include experienced ambassadors and Foreign Service officers at the ambassador level as team members rather than team leaders to help mitigate concerns regarding the appearance of independence raised by the State IG's current practice. Finally, there is a need for a formal agreement between the State IG and the State Department Bureau of Diplomatic Security to coordinate their investigative activities to help ensure the independence of investigations of the State Department's management staff and to prevent duplication. To help ensure that the State IG provides the appropriate breadth and depth of oversight of the State Department's high-risk areas and management challenges, we recommend that the State IG reassess the proper mix of audit and inspection coverage for those areas. This reassessment should include input from key stakeholders in the State Department and the Congress and also entail an analysis of the appropriate level of resources needed to provide adequate IG coverage of high-risk and other areas in light of the increasing level of funding provided to the State Department. To provide for more complete internal quality reviews of inspections, we recommend that the State IG include inspections performed by the State IG's Office of Information Technology in its internal quality review process. To help ensure the independence of the IG office, we recommend that the State IG work with the Secretary of State to take the following actions: Develop a succession planning policy for the appointment of individuals to head the State IG office in an acting IG capacity that is consistent with the IG Act regarding State IG appointment and provides for independent coverage in the event of delays between IG appointments.
The policy should prohibit career Foreign Service officers from heading the State IG office in an acting IG capacity and specify within the IG's own succession order that acting IG vacancies are to be filled by eligible personnel without State Department management careers. Develop options to ensure that State IG inspections are not led by career Foreign Service officials or other staff who rotate to assignments within State Department management. Approaches could range from the State IG limiting its inspection activities to a level that is supportable without reliance on staff who routinely rotate to management offices, to permanently transferring or hiring additional staff, or FTEs, along with associated resources for the State IG office to eliminate the need to rely on Foreign Service and other rotational staff to lead inspections. In order to provide for independent investigations of State Department management and to prevent duplicative investigations, we recommend that the State IG work with the Bureau of Diplomatic Security, the Office of Management, and the Secretary of State to develop a formal written agreement that delineates the areas of responsibility for State Department investigations. Such an agreement would, for example, address the coordination of investigative activities to help ensure the independence of internal departmental investigations and preclude the duplication of efforts. In comments on a draft of this report, the State IG provided additional clarifying information and acknowledged that our review helped identify areas for improvement. With respect to our five recommendations in the draft report, the State IG agreed with two recommendations, partially agreed with one, and disagreed with two others. We are reaffirming our recommendations and provide our reasons below. With respect to the two recommendations for which there is agreement, the State IG agreed with our recommendations to (1) include all IG inspections, including inspections performed by the Office of Information Technology, in the internal quality review process and (2) work with DS and others to develop a written agreement delineating the areas of responsibility for department investigations. The State IG disagreed with our recommendation to reassess the mix of audit and inspection coverage, stating that "simply reassessing the mix of audit and inspection coverage will accomplish little if there are not more auditors and more resources available to perform audits." Our recommendation provides the IG with a way to define the appropriate level of oversight of the department, reallocate current resources as appropriate, and justify any additional resources that may be necessary. This reassessment is especially important given the increased appropriations provided to the State Department and because the current mix of audits and inspections evolved over a number of years when State Department management personnel served as acting IGs. By achieving a proper mix of audits and inspections, the State IG can help maximize the use of the department's resources through more effective oversight. In addition, the State IG's comments do not recognize the potential for reallocating inspection staff to an audit role. Under Government Auditing Standards, the current inspection staff may also conduct performance audits in order to provide both a forward-looking analysis and a review of historical performance.
Redesigning some inspections as performance audits to be performed by the current inspection staff could meet the needs of management for inspection results at the department's bureaus and posts and also provide the level of objectivity and evidence needed to assess high-risk areas and management challenges. Thus, for all of the reasons stated above, we continue to recommend that the State IG reassess the mix of audit and inspection coverage. The State IG does not disagree with our concerns about Foreign Service officers temporarily heading the IG office in an acting capacity, but believes that the recommendation goes too far by limiting the pool of eligible candidates to personnel without State Department management careers. We disagree with the IG's comments because of the importance of independence, which is the most critical element for IG effectiveness and success. To preserve IG independence, the IG Act requires that the IG not report to, or be subject to supervision by, any officer other than the Secretary or, if delegated, the Deputy Secretary. Appointing career department managers as acting State IGs would effectively subject the IG office to supervision by a management official other than the Secretary or Deputy Secretary. Therefore, we continue to recommend that the State IG exclude department officials with management careers from consideration in the succession planning for the IG office because of possible conflicts of interest and resulting independence issues. As alternatives, the State IG could consider PCIE recommendations for personnel to fill future acting IG positions, as well as IG staff with proven ability from other agencies. In addition, we have revised the recommendation in our report to clarify that the intended action is directed to the succession planning activities of the State IG's office in order to avoid any unintended conflict with the Vacancies Act, which gives the President wide authority to appoint personnel to acting positions throughout the executive branch of the federal government. The State IG acknowledged that ambassadors who serve as team leaders for inspections raise a concern about the appearance of independence. The State IG also believes that this concern is significantly outweighed by the overriding need for people with the experience and expertise of ambassadors to lead inspections. We disagree with putting independence second to experience and expertise and believe that the State IG can achieve both objectives with the proper staffing and structuring of its inspections. Our position remains that the State IG's inspection teams should not be led by career Foreign Service officers and ambassadors, but could include experienced ambassadors and staff at the ambassador level as team members rather than team leaders to help mitigate concerns about the appearance of independence caused by the State IG's current practice. Therefore, we continue to recommend that the State IG work with the Secretary of State to develop options to ensure that the IG's inspections are not led by career Foreign Service officials, including ambassadors or other staff who rotate from State Department management. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date.
At that time, we will send copies of the report to the Secretary of State, the State Department IG, the State Department Undersecretary for Management, the State Department Assistant Secretary for Diplomatic Security, the OMB Deputy Director for Management, the Chairman and Ranking Member of the Senate Committee on Foreign Relations, other congressional committees, and interested parties. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions or would like to discuss this report, please contact me at (202) 512-9471 or by e-mail at franzelj@gao.gov. Major contributors to this report are listed in appendix IV. (Appendixes I and II present tables comparing IG office budgets as a percentage of total agency budgetary resources, including entries for the Corporation for National and Community Service, the Treasury Inspector General for Tax Administration, the National Aeronautics and Space Administration, the Tennessee Valley Authority (TVA), the Department of Housing and Urban Development, and the Department of Health and Human Services; the table data are not reproduced here.)
GAO was asked to review the Department of State Office of Inspector General (State IG), including its (1) organization, budget levels, and accomplishments; (2) audit and inspection coverage of the department; (3) role of inspections in the oversight of the department; (4) quality assurance process, including assurance of independence; and (5) coordination of State IG investigations with the State Department's Bureau of Diplomatic Security. GAO obtained information from State IG reports, interviews, and documentation for a sample of inspections. The State IG provides oversight of the State Department, the Broadcasting Board of Governors, and the foreign affairs community, including the approximately 260 bureaus and posts around the world, through financial and performance audits, inspections, and investigations. Over fiscal years 2001 through 2005, in constant dollars, the State IG's budget increased by 1 percent, while the State Department's overall budget increased by 50 percent. This represents a relative decrease when comparing the State IG with other agencies' ratios of IG budget to total agency budget. The State IG provides oversight coverage of the areas designated as high risk by GAO and the management challenges identified by the IG, with a heavy emphasis on inspections. The State IG covers the high-risk areas of human resources, counterterrorism, public diplomacy, and information security almost exclusively through inspections. In fiscal year 2005, the State IG's ratio of inspections to audits was over two to one, while the federal statutory IGs had a combined ratio of one inspection to every ten audits. There are fundamental differences between inspections and audits. By design, audits performed under Government Auditing Standards are subject to more in-depth requirements for the levels of evidence and the documentation supporting the findings than are inspections performed under inspection standards. Due to the significance of the high-risk areas covered largely by inspections, the State IG would benefit from reassessing the mix of audit and inspection coverage of those areas. The State IG's audit and investigative functions both had recent peer reviews of quality assurance that resulted in "clean opinions." There is no requirement for a peer review of inspections; however, during our audit the State IG began an internal quality review process for inspections but did not include reviews of information technology inspections. Independence is critical to the quality and credibility of all the work of the State IG. Two areas of continuing concern that we have with the independence of the State IG involve (1) the temporary appointment of State Department management personnel, who subsequently return to management positions, to head the State IG office in an acting IG capacity and (2) the rotation of Foreign Service staff to lead IG inspections, including many who, along with other IG staff, move to positions in department management offices. Such staffing arrangements represent potential impairments to independence and the appearance of independence under professional standards applicable to the IGs. Both the State IG and the State Department's Bureau of Diplomatic Security pursue allegations of fraud by department employees. There is no functional written agreement in place to help ensure the independence of internal departmental investigations and preclude the duplication of efforts.
Medicare covers medically necessary ambulance services when no other means of transportation to receive health care services is appropriate, given the beneficiary's medical condition at the time of transport. Medicare pays for both emergency and nonemergency ambulance transports that meet the established criteria. To receive Medicare reimbursement, providers of ambulance services must also meet vehicle and crew requirements. Transport in any vehicle other than an ambulance, such as a wheelchair or stretcher van, does not qualify for Medicare payment. Medicare pays for different levels of ambulance services, which reflect the staff training and equipment required to meet the patient's needs. Basic life support (BLS) is provided by emergency medical technicians (EMTs). Advanced life support (ALS) is provided by paramedics or EMTs with advanced training. ALS with specialized services is provided by the same staff as standard ALS but involves additional equipment. Currently, Medicare uses different payment methods for hospital-based and freestanding ambulance providers. Hospital-based providers are paid based on their reasonable costs. For freestanding providers, Medicare generally pays a rate based on reasonable charges, subject to an upper limit that essentially establishes a maximum payment amount. Freestanding providers can bill separately for mileage and certain supplies. Between 1987 and 1995, Medicare payments to freestanding ambulance providers more than tripled, from $602 million to almost $2 billion, rising at an average annual rate of 16 percent. Overall Medicare spending during that same time increased 11 percent annually. From 1996 through 1998, payments to freestanding ambulance providers stabilized at about $2.1 billion. The Balanced Budget Act of 1997 (BBA) stipulated that total payments under the fee schedule for ambulance services in 2000 should not exceed essentially the amount that payments would have been under the old payment system. This requirement is known as a budget neutrality provision. In 1997, 11,135 freestanding and 1,119 hospital-based providers billed Medicare for ground transports. The freestanding providers are a diverse group, including private for-profit, nonprofit, and public entities. They include operations staffed almost entirely by community volunteers, public ventures that include a mix of volunteer and professional staff, and private operations using paid staff operating independently or contracting their services to local governments. In our July 2000 report, we noted that about 34 percent were managed by local fire departments. In several communities, a quasi-governmental agency owned the ambulance equipment and contracted with private companies for staff. The majority of air ambulance transports are provided by hospital-based providers. An estimated 275 freestanding and hospital-based programs provide fixed-wing and rotor-wing air ambulance transports, which represent a small proportion (about 5 percent) of total ambulance payments. In our July 2000 report, we noted that several factors characterizing rural ambulance providers may need consideration in implementing an appropriate payment policy. These include: High per-transport costs in low-volume areas. Compared to their urban and suburban counterparts, rural ambulance providers have fewer transports over which to spread their fixed costs because of the low population density in rural areas.
Yet, rural providers must meet many of the same basic requirements as other providers to maintain a responsive ambulance service, such as a fully equipped ambulance that is continually serviced and maintained, and sufficient numbers of trained staff. As a result, rural providers that do not rely on volunteers generally have higher per-transport costs than their urban and suburban counterparts. Longer distances traveled. A common characteristic of rural ambulance providers is a large service area, which generally requires longer trips. Longer trips increase direct costs through higher mileage costs and staff travel time. They also raise indirect costs because ambulance providers must have sufficient backup services when vehicles and staff are unavailable for extended periods. Current Medicare payment policy generally allows freestanding providers to receive a payment for mileage. Nevertheless, mileage-related reimbursement issues, such as the amount paid for mileage, represent a greater concern to rural providers because of the longer distances traveled. Lack of alternative transportation services. Rural areas may lack alternative transport services, such as taxis, van services, and public transportation, which are more readily available in urban and suburban areas. This situation is complicated by the fact that some localities require ambulance providers to transport in response to an emergency call, even if the severity of the problem has not been established. Because of this situation, some providers transport a Medicare beneficiary whose need for transport does not meet Medicare coverage criteria and must therefore seek payment from the beneficiary or another source. Reliance on Medicare revenue. Medicare payments account for a substantial share of revenue for rural ambulance providers that bill Medicare. Among rural providers, 44 percent of annual revenue in 1998, on average, came from Medicare, compared to 37 percent for urban providers, according to Project Hope Center for Health Services, a nonprofit health policy research organization. Additionally, for some rural providers, other revenue sources, such as subsidies from local tax revenues, donations, or other fundraising efforts, have not kept pace with the increasing costs of delivering services. Decreasing availability of volunteer staff. Rural ambulance providers traditionally have relied more heavily on volunteer staff than providers in urban or suburban areas. Some communities that have had difficulty recruiting and retaining volunteers may have had to hire paid staff, which increases the costs of providing services. Medicare's proposed fee schedule, published in September 2000, reduces the variation in maximum payment amounts to similar providers for the same types of services. The considerable variation that exists in the current payment system does not necessarily reflect expected differences in provider costs. For example, in 1999, the maximum payments for two types of emergency transport (one requiring no specialized services and the other requiring specialized services) were the same in Montana at $231 for freestanding providers. In North Dakota, the maximum payment was about $350 and also did not differ measurably for the two types of transport services. In contrast, South Dakota's maximum payment for the less intensive transport was $137, which was $30 lower than the payment for the transport requiring specialized services. Per-mile payments also varied widely.
For example, in rural South Dakota, the payment was just over $2 per mile, compared to $6 per mile in rural Wyoming. The shift to the proposed fee schedule would narrow the wide variation in payments to ambulance providers for similar services. The proposed schedule includes one fee for each level of service. This fee is not expected to vary among providers except for two possible adjustments: one for geographic wage and price differences and the other based on the beneficiary's location, rural or urban. As a result, a national fee schedule is likely to increase per-trip payments to providers whose payments under the current system are considerably below the national average and to decrease payments to providers whose payments have been substantially above it. As part of its mandate, the negotiated rulemaking committee was directed to consider the issue of providing essential ambulance service in isolated areas. The committee recommended a rural payment adjustment to recognize the higher costs associated with low-volume providers and to ensure adequate access to ambulance services. Consistent with the committee's recommendation, the proposed fee schedule includes an additional mileage payment for the first 17 miles for all transports of beneficiaries in rural areas. The mileage payment adjustment, however, treats all providers in rural areas identically and does not specifically target providers that offer the only ambulance service for residents in the most isolated areas. As a result, some providers may receive the payment adjustment when they are not the only available source of ambulance service, so the adjustment, spread across all rural providers, may be too low for the truly isolated providers. In addition, the proposed rural adjustment is tied to the mileage payment rather than the base rate and, therefore, may not adequately help low-volume providers. Such providers may not have enough transports to enable them to cover the fixed costs associated with maintaining ambulance service. The per-mile cost would not necessarily be higher with longer trips. It is the base rate, which is designed to pay for general costs such as staff and equipment, and not the mileage rate, that may be insufficient for these providers. For that reason, adjusting the base rate rather than the mileage rate would better account for higher per-transport fixed costs. In response to our 2000 report, HCFA stated that it intends to consider alternative adjustments to more appropriately address payment to isolated, essential, low-volume rural ambulance providers. Whether or not a claim for ambulance transport is approved varies among carriers, and these discrepancies can translate into unequal coverage for beneficiaries. In 1998, denial rates for claims for emergency and nonemergency ambulance transports ranged from 9 percent to 26 percent among the nine carriers that processed two-thirds of all ambulance claims. Different practices among carriers, including increased scrutiny due to concerns about fraud, may explain some of the variation in denial rates. Following are other inconsistencies in carrier practices cited in our July 2000 report that may help explain denial rate differences: National coverage policy exists only for some situations. Generally, Medicare coverage policies have been set by individual carriers rather than nationally by HCFA.
For example, in 1998, the carrier covering ambulance providers in New Jersey and Pennsylvania reimbursed transports at ALS levels where local ordinances mandated ALS as the minimum standard of care for all transports. In contrast, the carrier for an ambulance provider in Fargo, North Dakota, reduced many of the provider's ALS claims to BLS payment rates, even though a local ordinance required ALS services in all cases. (The carrier's policy has since changed.)

Some carriers were found to have applied criteria inappropriately, particularly for nonemergency transports. For example, for Medicare coverage of a nonemergency ambulance transport, a beneficiary must be bed-confined. In the course of our 2000 study, we found that one carrier, which processed claims for 11 states, applied the bed-confined criterion to emergency transports as well as to those that were nonemergency. (The carrier's policy has since changed.)

Providers were concerned that carriers sometimes determined whether Medicare would cover an ambulance claim based on the patient's ultimate diagnosis, rather than the patient's condition at the time of transport. Medicare officials have stated that the need for ambulance services is to be based on the patient's medical condition at the time of transport, not the diagnosis made later in the emergency room or hospital.

Ambulance providers are required to transport beneficiaries to the nearest hospital that can appropriately treat them. Carriers may have denied payments for certain claims because, in determining whether a hospital could have appropriately served a beneficiary, they relied on survey information specifying what services particular hospitals offer. This survey information does not always accurately reflect the situation at the time of transport, such as whether a bed was available or whether the hospital was able to provide the necessary type of care.

Some providers lacked information about how to fill out electronic claims forms correctly. Volunteer staffs in particular may have had difficulty filing claims, as they often lacked experience with the requirements of Medicare's claims payment process. An improperly completed claim form increases the possibility of a denial.

Claims review difficulties are exacerbated by the lack of a national coding system that easily identifies the beneficiary's health condition to link it to the appropriate level of service (BLS, ALS, or ALS with specialized services). As a result, the provider may not convey the information the carrier needs to understand the beneficiary's medical condition at the time of pickup, creating a barrier to appropriate reimbursement. Medicare officials have stated that a standardized, mandated coding system would be helpful, and the agency has investigated alternative approaches for implementing such a system. The agency contends that using standardized codes would promote consistency in the processing of claims, reduce uncertainty for providers regarding claims approval, and help in filing claims properly.

Overall, the proposed fee schedule will improve the equity of Medicare's payments to ambulance providers. Payments will likely increase for providers that now receive payments that are lower than average, whereas payments will likely decline for those now receiving payments above the average.
In our July 2000 report, we recommended that HCFA modify the payment adjuster for rural transports to ensure that it is structured to address the high fixed costs of low-volume providers in isolated areas, as these providers’ services are essential to ensuring Medicare beneficiaries’ access to ambulance services. HCFA agreed to work with the ambulance industry to identify and collect relevant data so that appropriate adjustments can be made in the future.
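To make the structural point concrete, the sketch below works through a hypothetical payment under the general structure of the proposed schedule: a base rate for the level of service, adjusted for geographic differences, plus a mileage payment, with the rural adjustment applied only to the first 17 miles. All dollar amounts, the provider's transport volume, and its fixed costs are assumed for illustration; they are not the proposed rule's actual figures.

```python
# Minimal sketch of the proposed fee schedule's payment structure.
# All rates, volumes, and costs below are assumed for illustration only;
# they are not the proposed rule's actual figures.

BASE_RATE = {"BLS": 200.0, "ALS": 340.0}   # assumed national base rates by service level
MILEAGE_RATE = 5.0                         # assumed payment per loaded mile
RURAL_MILEAGE_ADD_ON = 2.5                 # assumed rural add-on, first 17 miles only

def transport_payment(level, loaded_miles, rural, geo_index=1.0):
    """Payment for one transport: adjusted base rate plus mileage, with the
    rural adjustment tied to the first 17 miles rather than the base rate."""
    base = BASE_RATE[level] * geo_index
    mileage = MILEAGE_RATE * loaded_miles
    if rural:
        mileage += RURAL_MILEAGE_ADD_ON * min(loaded_miles, 17)
    return base + mileage

# An isolated, low-volume rural provider: few transports, high fixed costs.
annual_fixed_costs = 120_000.0   # assumed cost of staffing and maintaining the service
transports_per_year = 150        # assumed annual volume
average_miles = 40

revenue = transports_per_year * transport_payment("BLS", average_miles, rural=True)
print(f"Annual Medicare revenue: ${revenue:,.0f} vs. fixed costs of ${annual_fixed_costs:,.0f}")
# The rural add-on contributes at most 17 * 2.5 = $42.50 per transport, no matter
# how few transports the provider runs, which is why an adjustment to the base
# rate targets high per-transport fixed costs more directly than a mileage add-on.
```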
The Balanced Budget Act of 1997 required Medicare to change its payment system for ambulance services. In response, the Health Care Financing Administration (HCFA), now called the Centers for Medicare and Medicaid Services (CMS), proposed a fee schedule to standardize payments across provider types on the basis of national rates for particular services. Under the act, the fee schedule was to have applied to ambulance services furnished on or after January 1, 2000. HCFA published a proposed rule in September 2000 and has received public comment, but it has not yet issued a final rule. This testimony discusses the unique concerns of rural ambulance providers and the likely effects of the proposed fee schedule on these providers. Rural ambulance providers present a set of unique challenges for setting an appropriate payment policy. Rural providers--particularly those serving large geographic areas with low population density--tend to have high per-trip costs compared with urban and suburban providers. The proposed Medicare fee schedule does not sufficiently distinguish the providers serving beneficiaries in the most isolated rural areas and may not appropriately account for the higher costs of low-volume providers.
The Security Agreement between the United States and the Government of Iraq clearly states the objectives for the drawdown from Iraq, and DOD has further defined the conditions necessary to achieve these objectives. Time lines for the drawdown were established by the Security Agreement and further defined by the President of the United States. The Security Agreement provides that all U.S. forces, a term that includes personnel and equipment, shall withdraw from Iraqi territory no later than December 31, 2011. In addition, the U.S. government must transition all remaining bases where it maintains a presence to the Government of Iraq upon withdrawal. With regard to the retrograde of equipment and base transitions, the high-level conditions DOD has identified as important to the achievement of these objectives include the orderly and efficient movement or transfer, as appropriate, of equipment out of Iraq by the time lines established by the Security Agreement. Further conditions include the establishment of a mission-capable, civilian-led presence in Iraq by October 1, 2011, which is necessary to enable DOD to focus on achieving the redeployment of personnel, retrograde of equipment, and base transition goals by the end of the year.

DOD anticipates that after December 31, 2011, all U.S. personnel remaining in Iraq, including DOD military personnel and civilians, will operate under the authority of the Chief of Mission for execution of security assistance activities. The U.S. government intends to stand up a regional diplomatic presence, a large-scale police training program, and an office of security cooperation (under the Chief of Mission's authority) to continue training and equipping the Iraqi security forces. According to the State Department Iraq Transition Coordinator, as of June 2011, the plans for the U.S. government presence in Iraq after 2011 include about 16,000 personnel. This official stated that these personnel will perform a wide range of functions in addition to diplomacy and security assistance/cooperation, with the majority likely to be contractor personnel responsible for security and life support (such as facility operation, food service, laundry, etc.). Besides meeting requirements for security and life support, other major aspects of the transition include acquiring the use of property through land use agreements, repurposing or constructing new facilities, and defining requirements for and implementing solutions in the areas of logistics, aviation, equipment, information technology, and contracting/contract oversight.

The logistics infrastructure supporting the redeployment and retrograde effort in the Iraqi theater of operations is large and complex, consisting of military organizations operating in both Iraq and Kuwait. It is through Kuwait's three seaports and two airports that the majority of U.S. forces and all of DOD's sensitive equipment, such as combat vehicles, flow from the theater of operations. DOD also uses commercial shipping firms to retrograde units' nonsensitive material and equipment, such as individual equipment and spare parts, through ports in Jordan and Iraq, and uses an airport in Iraq in addition to airports in Kuwait to facilitate the redeployment of military personnel. Myriad logistics organizations in both Iraq and Kuwait support these operations, including elements of U.S. Central Command (CENTCOM), USF-I, U.S. Army Central (ARCENT), U.S. Transportation Command, U.S.
Special Operations Command, the Defense Logistics Agency, the 1st Theater Sustainment Command, Army Materiel Command, and U.S. Air Forces Central Command. Many of these organizations have command relationships with each other, and their activities are synchronized through the issuance of written orders that define each organization’s drawdown tasks, among many other things. In the case of the drawdown from Iraq, such orders and associated activities comprise DOD’s plans. U.S. forces in Iraq rely on contractor personnel to provide a wide range of services including managing dining facilities, repairing military vehicles, providing trucks and drivers for transporting supplies, and maintaining airfields. Military units, such as the “mayors” who oversee base operations, communicate their needs for contracted services to the appropriate contracting personnel, who in turn seek to fulfill these “requirements” through contracting vehicles such as orders, modifications, or new contracts. According to DOD data, as of May 30, 2011, there were approximately 61,000 contractor personnel in Iraq. Approximately 52 percent of these contractor personnel are working under the Logistics Civil Augmentation Program (LOGCAP), the largest single contract supporting operations in Iraq and Kuwait. The day-to-day activities of LOGCAP contractor personnel in Iraq are overseen by contracting officers’ representatives (COR) managed by the Defense Contract Management Agency (DCMA), which administers the contract in Iraq on behalf of the LOGCAP Program Office, U.S. Army. The remainder of the contractor personnel primarily work under contracts awarded by CENTCOM-Joint Theater Support Contracting Command and perform a range of services. Although contracting officers are responsible for providing contract oversight, day-to-day oversight of contractors is generally the responsibility of CORs, who ensure that the government receives the agreed-upon services at the agreed-upon quality, avoids poor outcomes, and minimizes fraudulent practices. CORs typically come from military units and perform their duties as an added responsibility. GAO has issued several reports over the past 3 years addressing the drawdown of forces and equipment from Iraq. In September 2008, we reported on the progress of drawdown planning, and concluded that DOD had not adequately defined roles and responsibilities for executing the drawdown, resulting in multiple teams engaged in retrograde operations without a unified or coordinated chain of command. We recommended that the Secretary of Defense, in consultation with CENTCOM and the military departments, take steps to clarify the chain of command over logistical operations in support of the retrograde effort. Since that time, a number of DOD organizations have issued plans outlining a phased drawdown from Iraq that meet time frames set forth in the Security Agreement and presidential guidance while being responsive to security conditions on the ground. Furthermore, partially in response to our recommendation, DOD has created several organizations to achieve unity of effort over retrograde operations. After the publication of our September 2008 report, we continued to monitor DOD’s progress in planning for and executing the drawdown. In November 2009, we testified before the Commission on Wartime Contracting in Iraq and Afghanistan outlining several unresolved issues that had the potential to impede the effective execution of the drawdown. 
Following that testimony, we issued a report in April 2010 that went into greater detail on the progress of the drawdown and identified challenges that could affect its efficient execution. We recommended that the Secretary of Defense direct the appropriate authorities to take action with regard to planning for achieving unity of effort in operational contract support, mitigating the risks of contract transitions and insufficient contract oversight personnel, and clarifying the capacity of Kuwait as a temporary staging location for equipment. DOD concurred with all of our recommendations and stated that it is taking steps to address each one. For example, since our April 2010 report, DOD conducted an analysis of the benefits and costs of a prior planned transition to a new LOGCAP contract and decided not to make the transition based on its findings.

DOD has robust plans and processes for determining the sequence of actions and associated resources necessary to achieve its objectives for the drawdown from Iraq. The current phase of the drawdown is well under way, with a significant amount of equipment removed from Iraq and bases transitioned, among other things. Further, DOD successfully completed the previous drawdown phase, demonstrating the ability to plan and execute complex drawdown operations. However, several factors, including limited operational flexibility and the need to move a greater amount of equipment and close the largest bases with fewer available resources, create a set of challenges and risks greater than what DOD faced during the prior drawdown phase. DOD's existing plans and processes create flexibility and mitigate risk, but DOD continues to face challenges maintaining real-time visibility over some equipment and tracking unaccounted-for equipment remaining after bases undergo the transition process.

The completion of the prior drawdown phase, conducted between June and August 2010, demonstrated DOD's ability to plan and execute complex drawdown operations. Several contributing factors enabled the successful reduction of military forces to 50,000 in accordance with the August 31, 2010, time line and the removal of non-mission-essential equipment from Iraq.

• Use of modeling tools and metrics. The models and projections run by the Army's Responsible Reset Task Force, ARCENT Comptroller staff, and the CENTCOM Deployment Distribution Operations Center helped to more accurately predict the personnel and cargo flows out of Iraq, enabling the positioning of necessary resources and, as a whole, ensuring that sufficient capacity was in place to meet logistics requirements. Based on the known amount of equipment in Iraq, USF-I, in conjunction with other DOD organizations, set monthly targets for the reduction of rolling and containerized nonrolling stock, and DOD organizations in Kuwait created and refined a set of tools to track the activities conducted to meet these targets and provide the visibility necessary to make adjustments. For example, Army field support brigade and Responsible Reset Task Force personnel worked together to refine the flow chart used to track the movement of equipment through the critical nodes associated with the retrograde of equipment through Kuwait, such as wash racks, that could become limiting factors if stressed beyond capacity.

• Emphasis on end-to-end equipment movements. DOD took steps to ensure that non-mission-essential equipment removed from Iraq to Kuwait received rapid disposition.
When we visited Kuwait soon after the completion of this prior phase, the equipment lots were orderly and largely empty because equipment had been shipped to its final destination, such as Afghanistan or the United States, with the exception of the lot dedicated to the storage of Mine Resistant Ambush Protected vehicles. In addition, ARCENT was actively reducing the backlog of containers at the lot reserved for unserviceable equipment unloading and sorting. Further, by the time of our visit in March 2011, DOD had resolved the problems that had resulted in nearly 60 frustrated containers languishing in one lot that we had found during our visit to Kuwait in September 2010. The frustration was primarily due to a lack of customs documentation and poor container packing practices associated with a pilot program to send unserviceable equipment directly to a depot in the United States.

• Employment of commercial shipping and alternative airports for the removal of equipment and redeployment of personnel. DOD's use of commercial "door-to-door" shipping through Jordan and, to a lesser extent, Iraq itself, for the majority of nonsensitive unit equipment, and the use of Al Asad Air Base in Iraq for unit redeployments directly to the United States, successfully alleviated pressure on the Kuwait-based redeployment and retrograde infrastructure. For example, DOD officials we spoke with in September 2010, after the previous phase of the drawdown, noted that approximately 30 percent of containerized cargo went through the Jordanian port of Aqaba, while 20 percent went through the Iraqi port of Umm Qasr.

• Successful pilot of the partial self-redeployment concept. Partial self-redeployment of equipment and personnel consists of a military unit "road marching" from its location in Iraq to camps in Kuwait. During the road march, which is conducted as a military operation, the unit drives its own vehicles and provides for its own security, rather than scheduling movements for these vehicles via contracted transportation. As usual, the unit arranges for the shipment of its nonsensitive equipment via door-to-door moves through ports in Jordan and Iraq. DOD employed this concept with the 4th Stryker Brigade, 2nd Infantry Division, which departed Iraq in August 2010, just prior to the change of mission. According to DOD officials, partial self-redeployment reduces demand on critical transportation assets and will be employed during the current drawdown of forces.

DOD has conducted robust planning for the sequence of actions necessary to achieve its objectives for the drawdown. As they have for prior drawdown phases, the major commands involved in conducting the drawdown have issued extensive written plans. In particular, USF-I issued its Operations Order (OPORD) 11-01 and ARCENT issued its supporting OPORD 11-01. These plans include many annexes, appendixes, and tabs that provide a high level of detail. For the first time, USF-I's operations order includes an annex W that addresses the operational contract support issues specific to the drawdown, such as contract descoping and contractor demobilization. Among many other things, these plans include detailed roles, responsibilities, and tasks for military units and logistics staffs that pertain to completing the retrograde and transfer of equipment and necessary base transitions by the established dates. For example, these plans and their supporting documentation set forth the order of base closures and time lines that must be met to achieve operational objectives.
Other planning materials go into further detail on the ways DOD plans to achieve its objectives for the drawdown. For example, USF-I's "Base Closure Smart Book" provides a series of templates, instructions, and operating procedures that cover the entire base transition process. DOD continues to use the war-gaming process to further refine the sequence of drawdown actions and to identify and mitigate associated resource shortfalls. In particular, DOD employs "rehearsal of concept" drills, synchronization conferences, and focused "deep dive" analyses to round out its drawdown planning activities. For example, DOD has held several rehearsal of concept drills in Kuwait and Iraq that focus on the logistics aspects of the current drawdown phase, which are attended by senior leadership and planning officials from USF-I, ARCENT, other Army staff and components, as well as various elements within the Office of the Secretary of Defense, and State Department personnel, among others. During these conferences, attendees study all the steps the various commands will have to take to meet the drawdown objectives in order to reveal any outstanding issues and unmitigated risks and to determine solutions. For example, during the ARCENT-hosted rehearsal of concept drill held in March 2011, participants analyzed the amount of equipment that will have to be moved every week between March and December 2011 and matched these requirements with available capacity. Such conferences provide a process by which planners are able to reschedule equipment movements to less demanding periods should requirements exceed available resources and capacity at a particular time, and they set the stage for ongoing monitoring of key indicators such as Redistribution Property Assistance Team (RPAT) capacity. Under the process, should key resources such as transportation assets still be deemed insufficient, participants can set decision points for acquiring additional capacity. In addition, participants can take steps to synchronize key activities, including ensuring that services like those provided by Defense Logistics Agency-Disposition Services, which conducts disposal, demilitarization, and re-utilization of unserviceable equipment, do not end while they are still needed to facilitate the drawdown.

DOD has made substantial progress in executing the drawdown since our April 2010 report, and the current phase of the drawdown is well under way. As of June 2011, approximately 46,000 military personnel and 61,000 contractor personnel continue to conduct operations or work under DOD contracts, down from pre-drawdown levels of 134,100 and 125,163, respectively. With regard to equipment, as of May 2011 DOD had retrograded 2.36 million pieces since May 2009, or approximately 69 percent of the amount of equipment that was in Iraq in May 2009. Of the total number of bases, DOD had closed or transitioned 452, leaving 53. According to senior DOD officials, base transition activities are proceeding ahead of schedule and U.S. forces are proactively removing non-mission-essential equipment and materiel such as excess ammunition, although the level of effort required to complete the transition of the remaining bases will be higher than it has been for the smaller bases that have closed to date. In addition to the retrograde of equipment, DOD continues to make progress in transferring equipment to the Government of Iraq, with over 38 percent of about 48,000 items of equipment provided to Iraq as of May 2011 under the United States Equipment Transfer to Iraq program.
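The percentages reported above imply rough orders of magnitude for what remains to be retrograded or conveyed; the back-of-the-envelope check below derives them from only the figures cited in this report. These are approximations, not DOD-reported remainders.

```python
# Back-of-the-envelope check using only the figures cited above; the derived
# values are approximations, not DOD-reported remainders.

retrograded_pieces = 2_360_000    # pieces retrograded, May 2009 through May 2011
retrograded_share = 0.69          # ~69 percent of the equipment in Iraq in May 2009

implied_may_2009_total = retrograded_pieces / retrograded_share
implied_pieces_remaining = implied_may_2009_total - retrograded_pieces

transfer_program_items = 48_000   # United States Equipment Transfer to Iraq program
transferred_share = 0.38          # "over 38 percent" transferred as of May 2011
items_left_to_transfer = transfer_program_items * (1 - transferred_share)

print(f"Implied May 2009 equipment total:   ~{implied_may_2009_total:,.0f} pieces")
print(f"Implied pieces still to be moved:   ~{implied_pieces_remaining:,.0f}")
print(f"Transfer-program items still to go: ~{items_left_to_transfer:,.0f}")
```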
DOD intends to complete all of its planned transfers, excluding Foreign Excess Personal Property, by December 2011. For the category of non-excess equipment for which DOD obtained special statutory authority to transfer, on which we have previously reported, senior DOD officials state that the department has requested an extension of the relevant authority as part of its fiscal year 2012 legislative proposals, which they state will help ensure the completion of these transfers as planned. Figure 1 shows the personnel and equipment that have been retrograded during all prior drawdown phases, as well as what remains for DOD to redeploy, retrograde, or transfer, as appropriate, prior to December 31, 2011.

Beyond the uncertain security environment and potential for increased violence indicated earlier, which could affect DOD's retrograde operations and base transitions, DOD will face greater risks and challenges to its ability to complete the current drawdown phase than it faced earlier, at least in part due to three primary factors:

• DOD will have less operational flexibility. Like the prior drawdown phase, the current phase will peak during the final months before DOD intends to achieve its operational objectives. During the prior drawdown phase, DOD set monthly equipment retrograde targets to achieve a notional goal for the amount of equipment remaining in Iraq by August 31, 2010, but had the ability to address any unanticipated requirements after that date. However, in this final phase, DOD must now achieve its equipment retrograde goals by a specific date and, as a result, cannot leave United States forces' equipment in Iraq to be dealt with after December 31, 2011. DOD therefore lacks the flexibility it was able to draw upon in retrograding equipment during the prior drawdown phase in case unexpected challenges arise.

• Equipment retrograde and base transition requirements are greater than during prior drawdown phases. DOD will need to move and transfer a larger amount of equipment during the current phase of the drawdown than in the prior drawdown phase. For example, the unit responsible for processing theater-provided equipment for retrograde estimated that it will have to process an amount of this equipment four times greater than the amount associated with the prior drawdown phase. Further, DOD has yet to complete the transition of any of its large bases. Of the 53 bases remaining to be transferred in Iraq, 11 are considered large bases. All of these transitions are projected to occur prior to December 31, 2011, the date the current Security Agreement ends. According to DOD officials, each of these remaining base transitions will be more complex and time-consuming, and more likely to encounter unanticipated challenges, than such transitions have been to date due to the scope of activities necessary to complete them.

• DOD will have fewer available resources. DOD's infrastructure in Iraq that supports its equipment retrograde and base transition efforts, such as materiel handling equipment and military personnel, will simultaneously decrease as USF-I exits Iraq. Base-level personnel with whom we met expressed serious concerns about the sufficiency of military, civilian, and contractor personnel to set the conditions for transitioning the base according to the schedules required by USF-I's plan.
For example, officials were concerned that as living standards decrease on bases in Iraq and new job opportunities open elsewhere, contractors will be unable to remain fully staffed and thus less likely to complete their work and demobilize by the required date. In addition, DOD officials cite the collapsing support infrastructure in Iraq as a challenge for the current phase, noting concerns regarding the availability of key transportation resources, such as aviation assets, flatbed trucks, and heavy equipment transporters. Because DOD has fewer resources with which to meet a higher level of requirements amidst less operational flexibility, existing challenges associated with unanticipated requirements may be magnified.

However, according to DOD officials, the flexibility inherent in the plans and planning processes discussed earlier in this report mitigates the lack of operational flexibility and the challenges inherent in doing more with less. For example, according to these officials, written modifications to plans through fragmentary orders and an adjustable requirements projection process allow for the continual updates and adjustments necessary as conditions change. In addition, USF-I officials cite further risk mitigation built into current planning, such as 30 days of additional time added to each of the remaining bases' transition schedules to account for unanticipated delays. Further, senior DOD officials cite as risk mitigation the raising of the dollar value limit, from $15 million to $30 million per installation, of certain equipment that can be transferred to the Government of Iraq as Foreign Excess Personal Property in conjunction with a base closure or return, in accordance with DOD's prioritized excess equipment disposition process. In these ways, DOD accounts for the fluid nature of the operational environment and unforeseen operational requirements associated with the current drawdown phase. Notably, however, last-minute adjustments, such as those made in response to initially unanticipated retrograde requirements and associated transportation needs, may increase costs, since buying contracted transportation could be more expensive in the short term. On the whole, DOD officials assert that the department will meet its objectives for removing or transferring all equipment by December 31, 2011.

DOD also has been responsive to risks identified via our continued oversight. For example, during the course of our work, we found that Army guidance did not make clear whether units can turn unserviceable equipment in to RPAT yards as opposed to Defense Logistics Agency-Disposition Services sites. Because redeploying units are typically very busy, especially if they are leaving a transitioning base, we found that they were turning such equipment in to RPAT yards because it is more convenient, according to RPAT officials. However, officials noted that because units sometimes turn in such equipment without paperwork and have even removed identifying markings such as serial numbers to avoid retribution, determining disposition for these items has been a time-consuming and unanticipated challenge for the RPAT yards. In response to our findings, the Army rapidly issued guidance to clarify and reinforce the equipment disposition processes for the drawdown from Iraq, including the turn-in of unserviceable equipment.
In addition, according to the Defense Logistics Agency, Expeditionary Disposal Remediation Teams were established in April 2011 and started traveling with RPAT teams to process unserviceable assets and train the Army on filling out paperwork for unserviceable turn-ins. With regard to containers, a category of equipment for which we have previously reported that DOD lacked full visibility, USF-I reports that a recent audit in Iraq found the container system of record to be significantly more accurate than previously reported to us. Given the reasons for the poor initial accuracy, including a lack of discipline in recording containers' status as they changed locations, the challenge for USF-I will be to maintain this level of accuracy as the pace of the drawdown increases.

DOD has taken numerous and robust actions to mitigate the risks to completing an efficient and orderly drawdown of forces, but it continues to lack real-time visibility over contractor-managed, government-owned (CMGO) equipment and does not collect complete data on the amount of previously unaccounted-for equipment being found as bases transition, which may increase the likelihood that unanticipated requirements for retrograding or transferring this equipment will emerge. Joint doctrine cites the importance of joint logistics environmentwide visibility over logistics resources (including equipment), describing that visibility as a desired attribute of logistics information systems, in part, because it provides the knowledge necessary to make effective decisions. In this vein, DOD drawdown-related orders highlight such visibility as a priority for effectively and efficiently achieving drawdown objectives. For example, one drawdown order identifies the maintenance of asset visibility as a key task to ensure accountability and to help reduce cases of fraud, waste, and abuse.

As we previously reported, over time DOD has improved accountability and visibility for much of its equipment in Iraq but, as of April 2010, continued to face challenges with CMGO equipment. Specifically, officials responsible for property accountability cited the Federal Acquisition Regulation (FAR) requirement that contractors track equipment through their own systems as a limiting factor in these officials' ability to maintain real-time visibility. Because these systems are not linked to government systems, government personnel have been required to periodically request contractor-tracked information and rely on regular government-conducted physical inventories to ensure accurate visibility, which limits such visibility to points in time. Subsequent to our April 2010 review, Headquarters, Department of the Army, Logistics continued to raise this as a challenge from a drawdown planning and execution perspective. However, according to officials in the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics responsible for DOD equipment accountability policy, allowing contractors to track CMGO equipment using government systems as well as their own systems would remove critical checks and balances, thus heightening the potential for fraud, and a DOD memorandum suggests that the establishment of separate accountable property records by DOD components for contractor-acquired property could increase the likelihood of double-counting.
As a result, CMGO equipment can still be tracked in real time by government personnel, such as those responsible for executing the drawdown, only after the equipment has been "delivered" to the government, which often may not occur until contract performance ends. Therefore, real-time visibility over this category of equipment during the drawdown remains an issue. For example, USF-I estimated that its confidence in its total equipment visibility was only 80 percent as of June 2011, primarily due to shortfalls in its visibility over CMGO equipment, according to DOD officials. According to Army data, such equipment comprises over a third of the Army equipment remaining in Iraq.

To facilitate the drawdown, DOD has taken near-term actions to mitigate the lack of real-time visibility over CMGO equipment and improve the management of this property. First, USF-I coordinated with contractors to conduct full property inventories and submit a property re-allocation plan at least 120 days prior to the end of the contract performance period. According to senior DOD officials, all contractors overseen by DCMA have submitted these plans. These officials stated that the plans provide a starting-point inventory, by location and contract, of all CMGO property and, according to DOD, illustrate DOD's ongoing efforts to address CMGO issues. However, the information on equipment provided by the re-allocation plans still represents a "point in time" and does not provide real-time visibility while the assets are re-allocated. Similarly, while senior DOD officials expect that the results of the latest USF-I-performed wall-to-wall property inventory, scheduled to be completed by the end of June 2011, will increase the level of confidence in CMGO visibility beyond the current 80 percent, primarily by ensuring that all similar items, such as fire trucks, are consistently recorded, such visibility will only be an accurate snapshot as of that date—before much of this equipment will be leaving Iraq. Second, USF-I's Contracting Fusion Cell, which was established in March 2011 to centralize the reporting of contractor demobilization milestones from all bases within Iraq, manages a new database that tracks contractor personnel and equipment. According to USF-I officials, the intent is for the database to provide real-time data so that USF-I can track over time how much CMGO equipment needs to leave Iraq. However, DOD officials have expressed concern that the new database faces data reliability and completeness challenges similar to those facing other systems being used in Iraq to track contractor information, as discussed in more detail later in this report.

DOD's continued need to rely on the results of physical inventories to obtain accurate planning data may increase the likelihood that unanticipated requirements associated with the retrograde or transfer of CMGO equipment will emerge. In particular, as the CMGO equipment re-allocation, transfer, and retrograde processes continue, previously unaccounted-for property may be brought to record in a contractor's accountability system—yet remain invisible to the government unless it conducts further inventories. According to a senior DOD official, officials in Iraq recently discovered that one contractor had been using 200 CMGO trucks it had obtained from another contractor, yet had never transferred these vehicles to its own property record. Because these trucks were not on the contractor's list of equipment, they had not been included in prior inventories.
As a result, these trucks were not factored into DOD’s drawdown plans until they were properly added to the contractor’s equipment tracking system and checked by USF-I. According to DOD officials, USF-I is developing a standard operating procedure to address abandoned property that contractors might leave behind and decrease the time to obtain disposition instructions for such property from months to days, which may help mitigate the risk posed by unanticipated requirements. Nevertheless, as the number of forces in Iraq continues to decline, USF-I’s ability to conduct regular equipment inventories may become more limited, and, as a result, this kind of property may not become visible to drawdown planners until late in the drawdown process. Senior Army officials responsible for property accountability expressed concerns that CMGO equipment that contractors may deliver to the government and abandoned contractor equipment will comprise the greatest proportion of unaccounted equipment DOD will need to rapidly address during the drawdown, likely at the last minute. Some common CMGO items, such as materiel handling equipment, are expensive, in high demand in Afghanistan, and take a relatively large amount of resources, such as transportation assets, to move. DOD officials acknowledge that accountability and visibility of CMGO equipment needs to be re-examined and have noted that additional steps, likely in the form of policy and training, will be required. Without developing a means to achieve and maintain real-time visibility over critical CMGO property that retains the important checks and balances inherent to DOD’s current accountability processes, DOD will continue to face challenges ensuring the efficient retrograde and transfer of such property as it completes the drawdown in Iraq and begins the drawdown in Afghanistan. The transition of large bases in Iraq will likely exacerbate the challenges posed by the lack of real time visibility over CMGO property. In particular, DOD officials in Iraq remain concerned that the total amount of previously unaccounted-for equipment that DOD will need to address will likely increase. For example, after the completion of one of the largest base transitions to date, USF-I officials said that they were surprised at the amount of unaccounted-for equipment that was left over at the end of the transition process. Beyond CMGO equipment, Army data demonstrates that the increase over the past 2 years apparent in “found-on- installation” equipment rates is at least partially attributable to base closures in Iraq, but other factors, including the implementation of the Army’s Property Accountability Campaign, have also likely contributed, according to Army officials. Although Army officials view this increase positively because the Army can now account for this equipment, they also told us that Army-tracked found-on-installation data cannot be used as the sole indicator for leftover unaccounted-for equipment because such property may also represent equipment that was not properly entered into the Army’s property accountability system of record due to a lack of proper accompanying documentation. According to Army officials, USF-I has in the past tracked the amount of unaccounted-for equipment that was found remaining on bases that closed. For example, these officials previously identified such equipment as amounting to between 3 percent and 5 percent of all equipment on a base. 
However, based on their communication with USF-I, these officials now say that USF-I no longer tracks these data. As a result, DOD drawdown planners may lack an accurate planning factor for unaccounted-for government equipment and abandoned contractor equipment left over after the remaining bases in Iraq transition. Without continuing to track these data, DOD may therefore miss an opportunity to enhance the fidelity of its drawdown projections and improve its processes to reduce the amount of such property. DOD has taken action to improve its management of contracts in Iraq, such as enhancing contract oversight through command emphasis and assigning COR responsibilities as a primary duty in certain instances. However, other concerns, such as lack of experience among contract oversight personnel, remain. As the drawdown progresses, DOD may face further challenges in ensuring that major contracts transition without gaps in key services, and in effectively implementing its guidance for descoping contracts and demobilizing contractor personnel and infrastructure. Specific challenges for DOD include providing certain information, such as base closure dates, to contractors, obtaining information from contractors such as accurate personnel headcounts, and ensuring sufficient resources to facilitate full contractor demobilization. DOD has taken steps to address several of our findings related to issues affecting contract management for the drawdown. For example, we reported in April 2010 that USF-I guidance may not allow sufficient time for all contracted services needed during the drawdown to be put on contract in a responsible manner, which could lead to potential waste and service delays. Specifically, we found that standard operating procedures for requirements validation in Iraq only stated that personnel should submit requirements for contracted services at least 90 days prior to the date that funding is needed. However, this may not allow for sufficient time to obtain new contracted services and could lead to inefficient contracting practices. In March 2011, USF-I revised its financial management guidance to clarify time lines for submitting packages to the command’s requirement validation process. Specifically, the guidance informs units that, for requirements over a certain dollar threshold, they should consider the time it could take to obtain bids for new contracts, mobilize contractors, and perform other tasks associated with validating requirements, and adjust their submittal plans to USF-I accordingly, potentially 150 to 180 days before the start of the contract’s period of performance. In addition, USF-I issued an order that informed units to submit requirements to the Contract Review Board at least 90 days prior to the end of the contract’s period of performance for units with existing contract options or 120 to 135 days prior to the start of the period of performance for new contracts. Further, by requiring paperwork for late submissions explaining failure to comply, the order provides an additional incentive for units to submit their requirements for contracted services within the specified time frames. As a result, DOD has taken steps that could reduce the risks of poor outcomes that may follow from a lack of timely planning for contracted services, such as undefinitized contract actions, increased costs, lengthened schedules, underperformance, and service delays. 
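As a rough illustration of the lead times described above, the sketch below computes the latest submission dates that the revised guidance and order imply for a requiring activity. Only the day counts come from the guidance discussed in this report; the function names, the conservative use of the long end of each range, and the example dates are illustrative assumptions, not DOD tooling.

```python
from datetime import date, timedelta

# Illustrative sketch of the requirement-submission lead times described above.
# Only the day counts come from the revised USF-I financial management guidance
# and the follow-on order; names and example dates are assumptions.

def latest_submission_for_new_contract(pop_start, high_dollar=False):
    """Latest submission date, measured from the start of the new contract's
    period of performance (120-135 days; potentially 150-180 days for
    requirements over the dollar threshold).  Uses the long end of each range."""
    lead_days = 180 if high_dollar else 135
    return pop_start - timedelta(days=lead_days)

def latest_submission_for_existing_option(current_pop_end):
    """Latest submission date when an existing contract option can be exercised,
    measured from the end of the current period of performance (90 days)."""
    return current_pop_end - timedelta(days=90)

print(latest_submission_for_new_contract(date(2011, 10, 1)))         # 2011-05-19
print(latest_submission_for_new_contract(date(2011, 10, 1), True))   # 2011-04-04
print(latest_submission_for_existing_option(date(2011, 12, 31)))     # 2011-10-02
```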
In addition, we reported in April 2010 that USF-I’s predecessor, Multinational Force-Iraq, had in its drawdown plans delegated the responsibility for determining contract support requirements to contracting agencies, such as Joint Contracting Command-Iraq/Afghanistan (CENTCOM-Joint Theater Support Contracting Command’s predecessor), rather than to operational personnel such as combat force commanders, base commanders, and logistics personnel, among others. Further, we reported that, in accordance with joint doctrine and Army guidance, when planning for contractor support, planners must be aware of the operational principle of centralized contracting management to achieve unity of effort. We reported that centralized management can be achieved through means intended to synchronize and coordinate all contracting support actions being planned and executed in the operational area. USF-I has taken steps to ensure inclusion and coordination in determining contract support requirements for contract descoping and contractor demobilization between contracting support organizations and operational units. For example, USF-I, in preparation for the drawdown, issued an order requiring the senior tactical commander at each base to control and manage the accountability and drawdown of contracted support on their base. The order requires that these commanders, in conjunction with requiring activities and in coordination with contracting organizations, identify every service contract, task order, or service function operating within their base and determine a cessation date for each service and establish demobilization milestones. In a different order, USF-I instructed units to work with contracting organizations to identify and eliminate duplicate contracted services and to work with the Regional Contracting Center chief and other contract support organizations to determine the best contracting approach going forward. Such steps may help DOD improve its unity of effort in contract management as the drawdown progresses and ultimately concludes. DOD has also taken steps to improve contract oversight for the drawdown. For example, DOD has taken some steps to provide a sufficient number of trained contract oversight personnel to oversee contracts supporting the drawdown. We previously reported that DOD has had difficulties providing enough contract oversight personnel to deployed locations and training military personnel on how to work effectively with contractors in operations. In Iraq, we spoke with contracting officials from CENTCOM-Joint Theater Support Contracting Command and several Regional Contracting Centers, as well as officials from DCMA, LOGCAP, and the Air Force Contract Augmentation Program, and none reported experiencing contract oversight personnel shortfalls. DCMA employs a risk-based approach to contract oversight, allocating oversight personnel, such as CORs, and more frequent audits for contracts depending on the risk of mission failure and contractor problems. For example, according to DOD officials, DCMA has required monthly audits and assigned oversight personnel to contracts deemed medium to high risk, and depending on the contract, may conduct an audit every other month for those deemed low risk. Further, according to senior contracting officials, USF-I has taken steps to ensure that commanders and other senior leaders within the chain of command understand the importance of having CORs available and sufficiently trained to provide oversight during the drawdown. 
Several contracting officials said that they have seen an overall improvement in the following areas:

• Assignment of oversight functions as a primary duty: According to contracting officials in Iraq, many units recognize the need to have CORs perform their oversight duties in a full-time capacity. For instance, contracting officers responsible for contracts at Victory Base Complex and Joint Base Balad, a major air base north of Baghdad, said that units have CORs who work full time on overseeing contracts, such as the contract to provide bottled water to U.S. bases in Iraq. We also met with CORs from Air Force and Army units who stated that their primary roles were to provide contract oversight.

• Command emphasis on oversight: Several contracting officials attributed improvements in contract oversight to efforts by senior leaders to place a greater focus on issues involving operational contract support. For example, in October 2010, the USF-I Commanding General issued a memorandum describing the importance of the COR's oversight function and the need to ensure that CORs have the necessary training, time, and experience to perform their duties, citing our prior work.

• Improved training: CENTCOM-Joint Theater Support Contracting Command (through Regional Contracting Centers) has held regularly scheduled training in Iraq and Kuwait to ensure that CORs and other contracting personnel have the training and certification necessary to perform their contract-related responsibilities. Several CORs told us that they received a combination of classroom and online instruction, while others only received online instruction. However, several CORs told us that they did not find the online instruction to be effective in preparing them to perform their oversight responsibilities. Some were also provided training before they deployed to Iraq. Senior contracting officials said that they have a surplus of personnel trained as CORs in Kuwait and Iraq in case additional oversight personnel are necessary.

• Contractor demobilization preparation: In February 2011, the Regional Contracting Centers began holding demobilization orientations, developed by CENTCOM-Joint Theater Support Contracting Command in conjunction with DCMA, during which contract oversight personnel can discuss issues affecting contract demobilization, such as the need to obtain decisions from commanders on which contracts to descope and when to conduct such actions.

Nevertheless, DOD continues to experience some challenges ensuring full contract oversight. Army guidance states that CORs usually serve in their position as an extra duty, depending upon the circumstances, and senior DOD officials told us that assigning COR responsibilities as an extra duty is desirable because the government can take advantage of the individual COR's expertise associated with his or her primary duties. However, Army guidance also recognizes that it is a key duty that cannot be ignored without creating risk to the government. In addition, USF-I's drawdown guidance states that units should make every effort to ensure that contracts considered critical to their mission, or contracts with exceptionally large footprints, have dedicated COR oversight and, accordingly, requires units to provide full-time COR support for such contracts. In Iraq during the drawdown, contract oversight has been hindered in at least some instances in which CORs' primary duties have limited their ability to concentrate fully on their contract oversight duties.
For example, contractors have reported to contracting officials instances in which CORs were not available on site during some of the previous base closures, and their absence hindered the resolution of certain contractor demobilization issues. According to an October 2010 Center for Army Lessons Learned document, the quality of inputs from CORs declined during the previous drawdown as CORs refocused on their primary duties. However, senior DOD officials noted that the other duties CORs typically perform, such as force protection, may at times trump their COR duties. As the drawdown progresses, units may encounter challenges when transitioning one contract to another. We have previously reported on contract transition issues as challenges, and one of the major lessons learned from the prior drawdown phase is the need to synchronize such transitions with ongoing operations to mitigate the risk of service disruption. In 2010 an Army battalion stationed in Kuwait, responsible for providing theater sustainment-level maintenance, experienced a labor strike, service disruptions, accidents that resulted in deaths, and other challenges that unit leadership attributed in large part to the transition of a major maintenance contract. Also contributing to these challenges was the intensity of operational activities at the time, which included the peak of efforts needed to complete the prior drawdown phase, the build-up of forces in Afghanistan, and the reconstitution of the Army’s prepositioned equipment in Kuwait. These challenges added to the unit missing some required delivery dates for equipment intended for use in Afghanistan. The extent to which the unit meets required delivery dates is a key measure of mission success, according to unit personnel. During our March 2011 visit, several senior military officials in Kuwait expressed concerns with the transition of the major line haul (trucking) contract in Kuwait. According to these officials and DOD data, this contract, which is critical for transporting equipment between Iraq and Kuwait, is expected to complete its transition during a period of heightened operational activity. The LOGCAP transition in Iraq will also be challenging. In April 2010 we recommended that DOD analyze the benefits, costs, and risks of transitioning from LOGCAP III to LOGCAP IV and other service contracts in Iraq to determine the most effective and efficient means for providing essential services during the drawdown, recognizing that the department was not required to make the transition. DOD concurred with our recommendation, conducted the analysis, and decided not to conduct the transition to LOGCAP IV. Unlike during the prior drawdown phase, however, DOD’s only option for maintaining LOGCAP services in Iraq after December 2011 is to transition to LOGCAP IV and DOD has approved an internal Action Memorandum to potentially allow State to use LOGCAP at its sites after 2011 as appropriate and feasible. Altogether, LOGCAP IV support is planned for 12 sites that are currently LOGCAP- supported and seven sites, including locations in Erbil and Basrah, that do not currently have LOGCAP services. After a projected task order award date of July 31, 2011, the transition will occur in two phases, with base and life support functions, such as dining facilities and laundry services, expected to transition first during a projected 100-day period, followed by transportation and materiel handling functions. 
The Army projects LOGCAP IV to have initial operating capability (base and life support) by October 1, 2011, and full operating capability by December 31, 2011. Although the circumstances are different, like we found in our April 2010 report, the transition will carry risks. For example, a base in Iraq is expected to lose its bulk fuel and airfield operations capabilities needed during the transition until the new LOGCAP services are in place due to the length of time needed to complete transition tasks. In addition, because of the amount of work necessary to prepare sites DOD and State anticipate to be used after December 31, 2011, the existing contractor risks not completing its construction projects before the transition, according to senior LOGCAP program management officials. The transition will be made even more complex by the need to maintain base life support and transportation services to within days of base closures, according to LOGCAP program management documentation. Transitioning the transportation component of LOGCAP will have its own unique challenges, including a complex and time-consuming property disposition process and uncertain requirements to support State. To mitigate such risks, LOGCAP program management is taking steps, such as working with CENTCOM-Joint Theater Support Contracting Command and the Contracting Fusion Cell to validate property and material requirements on a location-by-location basis, according to LOGCAP program officials. In addition, according to LOGCAP documentation and a senior DOD official, LOGCAP is projected to transition first at the seven post-2011 locations where its services are currently not provided to account for additional complexity associated with standing up LOGCAP at the new sites. Finally, according to DOD officials, contractual actions such as period of performance extensions, where feasible, may help mitigate any potential service gaps. To facilitate the drawdown, DOD has taken steps to plan contract “descoping,” which, for the purposes of this report, we define as a reduction in services commensurate with declining needs, and contractor demobilization, which, in the context of the drawdown, we define as the contractor’s actions to reduce and ultimately end its presence and footprint if not needed to support the U.S. government’s presence in Iraq after 2011. At the theater level, CENTCOM-Joint Theater Support Contracting Command, under the direction of USF-I, established the “Contracting Fusion Cell” in March 2011, and USF-I issued a fragmentary order directing the Cell to centralize the reporting of contractor demobilization milestones from all bases within Iraq; measure, assess, and report contractor demobilization milestones; and provide guidance and assistance to units, staff elements, and contracting activities as required. Since its establishment, the Contracting Fusion Cell has participated in a Rehearsal of Concept drill and a contracting summit to review and analyze issues affecting contractor demobilization. We attended the contracting summit and observed USF-I staff, units from across Iraq, and other stakeholders review major issues concerning contract requirements and demobilization for participating units and bases. As mentioned in the previous section, the Contracting Fusion Cell also employs a database in which division commanders input data on each of their active contracts, including counts of contractor personnel and equipment. 
Several senior military officials said that this database has been useful in providing data to plan the movements of personnel and equipment for the drawdown. However, some contracting officials noted that the same issues that have affected other efforts to capture accurate and reliable data on the contractor population in Iraq, such as the general lack of available data for personnel on firm-fixed-price contracts and challenges counting contractors that are on leave or out of the country on emergencies, are likely to affect the Contracting Fusion Cell's database as well.

DOD has also improved contractor demobilization planning based on lessons learned from the prior drawdown phase. According to an October 2010 Center for Army Lessons Learned document, one lesson learned from the Senior Contracting Official-Iraq was that contractors needed more guidance regarding closing contractor camps (referred to as "mancamps") during the prior drawdown phase. This document stated that there were occasions when contractors left Iraq mancamps and associated facilities without proper closeout, abandoned equipment, failed to repatriate personnel (especially third country nationals), failed to obtain proper Iraq exit visas, did not return government-furnished equipment, did not close out in the appropriate contractor accountability system, and did not return badges. Since at least November 2010, CENTCOM has required all contracts and solicitations in Iraq to include a templated contractor demobilization clause that addresses the above-listed issues. CENTCOM-Joint Theater Support Contracting Command has also developed a template for CORs to ensure that contractor demobilization plans adhere to certain time frames. Moreover, USF-I has included in its guidance examples of cessation of services and contract demobilization schedules and a demobilization worksheet. However, according to senior contracting officials, there is no standard demobilization plan that contractors can submit. To address this shortfall, a senior contracting official stated in April 2011 that the office of the Senior Contracting Official-Iraq planned to develop a demobilization plan template for contractors.

At the unit level, mayor cells are working with units, DOD contracting activities (such as Regional Contracting Centers, LOGCAP, and DCMA), and contractors performing work on their respective bases to identify and determine when certain contract requirements can be reduced and ultimately terminated. For example, the mayor cell for Joint Base Balad has established a set of milestones and time lines to descope contracts and demobilize contractors performing work on the base. One contract planned for descoping involves airfield sweepers. Joint Base Balad officials said that they plan to reduce the number of contracted airfield sweepers after the base's fighter (F-16 squadron) mission ends and have also identified a date after which the services will no longer be needed. Additionally, senior officials in charge of Contingency Operating Base Marez, a U.S. base in northern Iraq, are planning to end their contract for security personnel to coincide with their base transition plans. The Contracting Fusion Cell, DCMA, and Regional Contracting Centers monitor the progress of contract descoping and demobilization through tools that track milestones and time lines for each of their respective contracts.
For instance, these organizations are tracking the submission of contractor demobilization plans, which are required by a CENTCOM-Joint Theater Support Contracting Command clause.

Units are taking further steps to ensure the continuity of key services while continuing to descope contracts. For example, as bases begin descoping contracts and demobilizing contractor personnel in preparation for base transition, some units are exploring the option of using local contractors to provide certain services. According to senior military officials, since local contractors do not require extensive base life support, such as housing, and will not have to be repatriated to their country of origin at the end of the contract, they can be employed to provide certain services that would otherwise have to be discontinued. However, we have previously reported on challenges hiring local national contractors, including the need for greater oversight due to Iraqi firms' relative lack of experience, limited capacity and capability, unfamiliarity with U.S. quality standards and expectations, and lack of quality control processes that U.S. firms have in place. Some units also intend to replace contractor personnel with servicemembers to ensure continuity of certain services, such as guard security, airfield vegetation removal, and generator maintenance, and are conducting "troop-to-task" analysis to determine which servicemembers will perform these tasks and how many will be needed. For example, the mayor cell at Joint Base Balad has developed plans to reduce contractor personnel for the base's incinerator operations and eventually replace them with servicemembers. Officials from one mayor cell noted that these additional tasks may further tax unit personnel who are in short supply and busy meeting other priorities.

Although major contractor demobilizations have yet to occur, early indications suggest that DOD faces several challenges as it implements its contractor drawdown guidance. DOD has guidance in place to facilitate the descoping of contract services and contractor demobilization. In particular, USF-I's drawdown guidance states that contracting organizations in Iraq are to work with the requiring activities (typically military units) and base leadership to ensure all contracts and task orders are adequately scoped to meet mission requirements and are scheduled to cease or terminate when no longer required. It also provides time frames by which contractors must be notified to complete key tasks and cease providing services. However, without taking additional steps to address the challenges discussed below, DOD may be unable to effectively implement its guidance and ensure the effective reduction of contract services to appropriate levels and ultimate demobilization of all its contractors.

Providing information to contractors. Guidance in a USF-I fragmentary order requires senior tactical commanders at each base to notify all contractors of the base closure or transition date no later than 180 days prior to the base closure or transition so that the contractors can start preparing their personnel and equipment for redeployment.
However, LOGCAP program officials were unable to provide base transition dates to subcontractors because base closure dates and other information relevant to demobilization are classified, which limited the contractors' ability to plan their demobilization tasks, such as replacing third country national personnel with local national personnel to ensure continuity of service while downsizing their infrastructure. An annex to USF-I's drawdown guidance also states that in most cases contractors must be notified in writing 45 to 120 days in advance of the service cessation date. Nevertheless, according to senior contracting officials, contractors have expressed concerns about the lack of clarity on when to reduce services and which contracted services will be needed as USF-I proceeds with the drawdown. According to senior contracting officials, some contractors reported instances in which they were notified only a few weeks in advance to transition to a new location, affecting their ability to plan. Fluid base transition dates may exacerbate this challenge. For example, according to a senior contracting official, the date for the transfer of a U.S. base to the Government of Iraq changed eight times within 3 weeks, which made it difficult to plan for the termination of contracts at the base and contractor demobilization.

Obtaining accurate and sufficient information from contractors. According to DOD officials, as part of demobilization planning, contractors submit property re-allocation plans that list property in use and property excess to the contractors' needs, as well as contractors' plans for re-allocating the property, among other things. Contractors submit these plans in conjunction with joint government/contractor inventories conducted 120 days prior to base transition. However, according to several contracting officials, some contractors had provided mayor cells with draft or incomplete plans, some of which contained inaccurate information and incorrect assumptions, on how they intend to redistribute their property in preparation for base transitions. USF-I drawdown guidance also requires senior tactical commanders at every base in Iraq to account for all task orders, contracts, and service functions on their bases, to include contractor employee headcount data, and to report such information on a regular basis to the Contracting Fusion Cell. However, several base management officials told us that because they do not have direct contact with or visibility over subcontractors, they cannot ensure that contractor personnel are not being undercounted during contractor headcounts, which may hinder planning for the resources needed to complete contractor demobilization.

Sufficiency of resources to complete contractor demobilization. According to USF-I guidance, in addition to preparing a demobilization plan, key tasks that contractors need to perform to complete demobilization include participating in joint property inventories of CMGO property at least 120 days prior to base transition, as well as scheduling and coordinating transportation, among other things. With regard to coordinating transportation, USF-I is working to include contractor personnel requirements in its planning but, according to senior contracting officials, contractors have expressed concerns about the availability of resources to redeploy their personnel and move their equipment as the drawdown progresses.
Contractors have also expressed concern about their ability to communicate with government personnel during demobilization, according to these officials.

DOD and State interagency coordination for the transition began late, but both agencies have now coordinated extensively to plan for the transfer or loan to State of a wide range of DOD equipment, and DOD has taken steps to minimize any impact on unit readiness of such transfers. DOD also has approved an internal Action Memorandum to potentially allow State to use DOD contracts to obtain services such as base and life support, food and fuel, and maintenance, as appropriate and feasible within funding constraints, but agreements between State and DOD have not been finalized and State may not have sufficient funding or capacity to oversee these contracted services. Further, State is taking steps to replace services that DOD will no longer provide, but these services will be different because State's mission in Iraq will be different from DOD's mission. In terms of scope, DOD plans a robust post-2011 presence as part of an Office of Security Cooperation operating under Chief of Mission authority. However, the extent to which DOD's personnel would receive status protections such as privileges and immunities and the limited nature of the anticipated engagement model with Iraq may not be fully understood throughout the department.

In addition to redeploying its military personnel and retrograding or transferring its remaining equipment, during the drawdown DOD aims to facilitate the transition to a civilian-led presence in Iraq, and, to that end, has engaged in formal interagency coordination with State at various levels within the two departments. One of the principal objectives of this coordination has been to define State's needs for external support and determine how DOD can best meet those needs. For example, DOD and State established the "Ad Hoc Senior Executive Steering Group on the DOD to State Transition" in September 2010 to assess State's needs in the logistics and sustainment areas, define requirements, and manage solutions, in particular those anticipated to be provided by DOD. Co-chaired by the Deputy Assistant Secretary of Defense for Program Support and the Deputy Assistant Secretary of State for Logistics Management, this group meets biweekly. According to these two officials, the meetings greatly facilitated State's ability to develop its requirements for DOD support, including equipment. In addition, both State and, according to DOD officials, DOD have designated a senior-level official responsible for the transition. For example, the State Department Iraq Transition Coordinator coordinates State's aspects of the transition from military to civilian operations in Iraq. On the ground in Iraq, multiple USF-I personnel, including planners and logisticians, are embedded as liaisons within Embassy Baghdad's Management Cell for Transition, and interagency transition cells are in place at all sites that are anticipated to transition to State throughout Iraq. Finally, USF-I stood up separate working groups for transitioning operations and base-level sustainment, which include State participation. Coordination at these multiple levels helped facilitate, for example, the identification of and planning for 310 of the more than 1,000 Joint USF-I/U.S. Embassy Baghdad Joint Campaign Plan-specified tasks DOD currently performs in Iraq that State anticipates assuming after the transition.
The coordination outlined here occurred late in the process, and the delays have made the transition more challenging than it otherwise could have been, compounding State's relatively limited capacity to plan, as noted by senior DOD officials and acknowledged by senior State officials. As a result, for example, State's Inspector General found that the initial lack of senior-level DOD and State officials in Washington, D.C., dedicated to the Iraq transition process contributed to the inability of operational-level DOD and State officials to obtain timely decisions on key transition issues. During our travel to Iraq, numerous officials at multiple levels cited the critical importance of planning early to minimize challenges in conducting future similar transitions, such as the one that will be necessary in Afghanistan.

DOD and State interagency coordination has facilitated the identification of State's requirements for DOD equipment and identified efficient solutions to meet these needs. In an April 2010 memo to DOD, State presented its assessment that it lacked the resources and capability to provide technology, vehicles, and aircraft to adequately meet the extreme security challenges in Iraq. The justification for DOD equipment transfer accompanying the memo suggested that, without the transfer of DOD military equipment, the security of State personnel in Iraq would be degraded significantly and one could expect increased casualties. To that end, according to State officials, State initially requested about 23,000 individual pieces of equipment encompassing a wide range of items. To meet these needs, DOD established an "Equipping Board" with members from the Office of the Secretary of Defense, Joint Staff, and military services. According to Equipping Board participants, State's initial request did not fully reflect the actual capabilities State needed. These officials said that DOD subject matter experts in areas such as medical and airfield logistics assisted State officials in defining State's requirements in these areas, reducing the request to around 3,800 individual pieces of mostly standard military equipment worth approximately $209 million. In addition to cutting potential costs to State by reducing the overall number of items requested, the board also created efficiencies by, for example, replacing requests for expensive new equipment, such as CT scan machines and night vision goggles, with older versions already in Iraq that, while less capable, will nevertheless meet State's needs, according to DOD officials.

In addition to DOD military equipment, State has also expressed needs for nonstandard equipment in Iraq. Aside from 60 Mine Resistant Ambush Protected (MRAP) vehicles, this equipment includes mainly low-value items, such as containerized housing units, desk chairs, and other office equipment, which USF-I plans to transfer after screening the items for USF-I, CENTCOM, and service requirements. In terms of the number of total items, the scope of nonstandard equipment transfers is projected to be much larger than the transfer of standard DOD military equipment. DOD plans to provide military equipment to State through various means, and for non-excess equipment has taken steps to mitigate any impact on readiness.
According to DOD documentation, 32 percent of the total State request will be comprised of excess defense articles provided at no cost, such as collapsible fabric fuel tanks, 7.5-ton cranes, and speakers; and about 6 percent will be items loaned, including the MRAPs and biometric equipment; and about 62 percent will be non-excess equipment provided to State through sales from stock, including items such as aircraft flares, radios, and medical equipment. According to DOD officials involved in the process, the non-excess equipment items for State were assigned a risk level to determine their potential impact on readiness if transferred. For example, 101 out of 185 medical item types were deemed to be at high risk of affecting readiness. According to DOD officials, for the high-risk items, State intends to pay full acquisition value to facilitate rapid replacement, versus the low-risk items, for which State plans to pay depreciated value. In addition, according to DOD officials, DOD has taken steps to accelerate the procurement of some of the high-risk items to be transferred to State. Finally, the MRAPs DOD intends to loan to State are coming out of requirements for Army Prepositioned Stocks rather than unit stocks. According to DOD, these factors will minimize any impact on unit readiness of transferring or loaning equipment to State. Remaining issues to be resolved include determining how to replace loaned equipment that is destroyed or severely damaged during the course of its use, since, according to DOD officials, State will likely have to request additional procurement funding if it determines that a replacement is necessary. In addition to equipment transfers and loans, through the interagency coordination process, DOD has approved an internal Action Memorandum to potentially support State’s post-2011 presence in Iraq by allowing State to use DOD contracts to obtain needed services as appropriate and feasible, but agreements between State and DOD have not been finalized. First, State anticipates obtaining base and life support such as dining facility and laundry operations through an order on the Army’s LOGCAP contract. The Army projects that between 4,500 and 5,500 contractor personnel will be necessary to provide these services to State. Second, State anticipates relying on a DOD contract to provide 100 DOD contractor personnel to maintain some of the equipment transferred and loaned by DOD, including major items such as vehicles, under a contract DOD already plans to have in place to support its own personnel in Iraq. Third, State anticipates obtaining food and fuel through Defense Logistics Agency contractors. Finally, DOD intends to provide various capabilities such as information technology support and the contracted capability to detect incoming rocket or mortar fire and provide warnings. According to DOD and State officials, using DOD’s existing contracting mechanisms for these services would be more efficient than if State were to award its own contracts. Documentation including DOD’s initial estimates valued the support requested by State at about $575 million per year, for which, under the proposed terms of a draft interagency agreement, State would reimburse DOD. However, DOD’s documentation raised concerns about State’s ability to fund these services, given the amounts designated for these purposes in State’s budget requests. 
According to State, the time frame for LOGCAP support is subject to negotiation with DOD, after which it may either award its own contract or use local supply options if conditions permit. According to State documentation, State currently faces shortfalls in personnel with sufficient experience and expertise to perform necessary contract oversight. As a result, State plans to use DOD support for certain contract management and oversight functions. In particular, the Defense Contract Management Agency (DCMA) and Defense Contract Audit Agency (DCAA) intend to provide contract pricing, administration, and audit services for the LOGCAP contract, and, according to DOD officials, Army Materiel Command has agreed to provide management functions for the maintenance contract. Projected requirements for these functions include 47 DCMA personnel supporting State operations, as well as 3 DCAA and 3 Army Materiel Command civilians. State would provide CORs to oversee the DOD contractors. According to State, the COR function is one that is normally part of the duties of a Foreign Service officer or specialist position at embassies abroad, and CORs are identified as part of the normal assignment cycle. As of early July 2011, State documentation identified 35 individuals to perform COR duties associated with 136 LOGCAP oversight areas across locations in Iraq, such as dining facilities operation and firefighting services. COR positions for 31 oversight areas remained to be filled, including air operations throughout Iraq.

In addition to receiving contract support through DOD, in some cases State intends to directly contract for services that it currently receives through DOD, particularly in the medical, aviation, information technology, and security areas. For example, State recently awarded a contract that State documentation indicates will provide for seven health units, one large Diplomatic Support Hospital, and three small Diplomatic Support Hospitals, in large part to replace medical services that DOD has provided to date in Iraq. In addition, State's Bureau of Diplomatic Security will conduct static security activities at U.S. facilities with only a State presence remaining in Iraq past December 31, 2011. According to DOD and State officials, DOD, through CENTCOM, would be responsible for security on the Office of Security Cooperation-Iraq (OSC-I) sites under the proposed terms of a draft Memorandum of Understanding between DOD and State. According to testimony from the Under Secretary of State for Management before the Wartime Contracting Commission, static and movement security for State's Embassy in Baghdad alone will cost nearly $2.5 billion over the next 5 years. Even with the increase in such capacity, the drawdown of military forces will result in lost protective security capabilities for State because State's mission in Iraq is significantly different from DOD's mission. As a result, State will rely to a greater extent on the Government of Iraq for certain types of security activities. For example, State will deploy a "sense and warn" platform that will allow for advance warning in case of incoming fire such as rockets and mortars, but it will not include the capability to fire back at the attackers that DOD currently fields at its bases, a capability that will become an Iraqi responsibility. According to DOD and State officials, the scale of the combined DOD and State presence in Iraq after December 2011 will be unprecedented.
A June 2011 DOD report to congressional committees projected nearly 20,000 DOD contractor personnel to be spread across all post-December 2011 sites in Iraq. However, DOD and State now expect this number to be lower and report that current plans call for an estimated total of 16,000 to 17,000 U.S. government direct hires and contractors. As stated recently by a Department of State official before the House Armed Services Committee, about 14,000 of those personnel will likely be contractor personnel operating under both DOD and State. According to DOD and State, the expected number of personnel has changed from the earlier projection because plans are continually being refined and contracts have since been awarded. DOD and State expect that the exact number of personnel in Iraq after December 2011 will continue to change as contracts are put in place and requirements are further refined.

In addition to providing contract support services to State as discussed earlier, DOD intends to operate an Office of Security Cooperation-Iraq (OSC-I), which would be funded by both DOD and State. As of June 2011, DOD planning documents called for DOD personnel to remain at 10 sites countrywide. Six of these sites would be OSC-I-only sites staffed by DOD personnel and contractors. DOD and State personnel, including those implementing the police training program under State's Bureau of International Narcotics and Law Enforcement Affairs, would be colocated at the four remaining sites. DOD's activities under OSC-I will include the fielding, administration, and oversight of an estimated 157 military or civilian personnel and Security Assistance Teams composed of 763 military, civilian, or contractor personnel. According to a report from the State Department's Office of Inspector General and senior DOD officials, OSC-I's mission would include advising, training, and equipping Iraqi forces; supporting professional military education; planning joint military exercises; and managing foreign military sales programs involving $6.1 billion in Iraqi funds and $2 billion in U.S. funds through the Iraqi Security Forces Fund. Under this mission, DOD's planned activities include Security Force Assistance, which is a new subset of security cooperation described in the 2010 Quadrennial Defense Review as encompassing activities to train, equip, advise, and assist host countries' forces in becoming more proficient at providing security to their populations and protecting their resources and territories. DOD also intends to provide for the management, security, and sustainment of its sites and some construction, which DOD officials refer to as "site improvements," to enhance the sites' suitability. According to senior DOD officials, with the exception of one site near the U.S. Embassy in Baghdad, the OSC-I presence in Iraq will not remain longer than 3 years.

According to senior DOD officials, in the absence of an Iraqi request for an extended U.S. military presence, the U.S. government is not attempting to negotiate a Status of Forces Agreement with the Government of Iraq with regard to the post-December 2011 U.S. presence. Rather than negotiating a Status of Forces Agreement, DOD is preparing to stand up OSC-I, though it does not yet have final approval from the Government of Iraq to establish such a presence. According to State officials, this leaves the Strategic Framework Agreement as the overarching basis for OSC-I's activities.
Nevertheless, DOD is proceeding with preparations for the OSC-I sites, including construction, absent land use agreements with the Government of Iraq, on the assumption that these agreements will be forthcoming. This carries some risk; for example, State officials noted that approximately $18 million was obligated to prepare an Embassy Branch Office in Mosul that was subsequently "indefinitely postponed" as an enduring site due in part to a lack of buy-in from the Iraqi government. While State is working to recoup some of those funds from the contractor, State officials stated that they expected to recoup only about $8 million to $10 million, although the exact amount had not yet been determined. According to State documentation and senior State officials, as of June 2011, the Government of Iraq had not formally signed any agreements for the OSC-I-only sites. According to DOD and State officials, delays associated with forming a government after Iraq's March 2010 parliamentary elections have hindered the negotiation of these agreements. In particular, Iraq continues to lack both a Minister of Defense and a Minister of Interior with whom to negotiate these agreements and others.

The scope of DOD's proposed mission in Iraq after 2011 and the extent to which DOD personnel conducting these activities will be afforded status protections may not be well understood throughout the department. According to senior DOD officials and State officials, without a request from the Government of Iraq for a follow-on U.S. military presence, all U.S. government activities in Iraq, including those performed by DOD military, civilian, and contractor personnel, will occur under Chief of Mission authority, as approved by the National Security Deputies Committee in May 2010. Additionally, according to senior DOD and State officials and DOD documentation, DOD and State anticipate that direct-hire, full-time DOD military and civilian personnel working under OSC-I can be accredited to the diplomatic mission as administrative and technical staff, with some status protections such as privileges and immunities provided under the Vienna Convention on Diplomatic Relations. Notwithstanding DOD's intent to operate under Chief of Mission authority, a CENTCOM information paper dated February 2011, coordinated with DOD's Office of the General Counsel, makes the assumption that, absent clarification from the Secretary of Defense, the 157 DOD personnel would operate under the direction of the CENTCOM commander, rather than the Chief of Mission. The information paper also raised some questions regarding the feasibility of notifying OSC-I personnel to the Government of Iraq as part of the administrative and technical staff. This apparent incongruity has contributed to a lack of understanding within the department of the precise scope of DOD's mission in post-2011 Iraq and the status protections that will be afforded to DOD personnel. For example, senior DOD officials stated that a variety of organizations within DOD continue to push for a role in post-2011 Iraq even though these organizations' activities are not part of the anticipated engagement model based on Chief of Mission authority, which, according to those officials, could limit the range of activities DOD can perform in Iraq.
Similarly, due to uncertainty regarding status protections, Army officials expressed concern that DOD would be unable to prevent one of its military or civilian personnel from languishing in an Iraqi jail if, for example, he or she were to be involved in an accident in which an Iraqi dies. Further, senior USF-I officials have expressed frustration with differing legal opinions on such issues. Without officially clarifying these issues or without a status of forces or other agreement that includes such details, DOD personnel may lack clarity as to the scope of DOD’s mission in Iraq after December 31, 2011, and the department may be less able to ensure unity of effort among its organizations and with State in completing the transition to a civilian-led presence in Iraq. DOD may therefore risk an uncoordinated approach in defining and implementing the range of activities its OSC-I personnel will perform. The drawdown of U.S. military forces and equipment from Iraq, an operation governed by the time line set forth in the Security Agreement, is an operation of unprecedented magnitude, and will occur amidst an uncertain political and security environment as well as the ongoing transition to a civilian-led U.S. government presence in Iraq. Much has been done to facilitate the drawdown. DOD has conducted detailed planning for the sequence of actions and associated resources necessary to mitigate risk and to achieve its goals of transferring and removing personnel and equipment from the remaining bases in Iraq. In addition, DOD has taken steps to improve its management and oversight of contracts in Iraq by issuing new guidance, developing metrics and milestones for tracking key dates and progress, establishing a cell to provide a common operating picture for all contracts in Iraq, and working to ensure a sufficient number of CORs are available to conduct oversight. To help facilitate the transition to a civilian-led presence in Iraq, DOD has engaged in interagency coordination with State at various levels, and both agencies are working closely to coordinate the provision of equipment and services needed to support the transition. However, without taking further action in regards to its visibility over CMGO equipment and in tracking equipment that is brought to record during the completion of base transitions, DOD may not be able to take advantage of further opportunities to reduce the likelihood of unanticipated requirements and to refine its drawdown projections. Further, challenges DOD faces in implementing its contractor demobilization guidance, including providing key information to contractors and ensuring robust contractor demobilization planning, may hinder the base transition process if contractors miss key dates or demobilize in a less than orderly fashion. Finally, DOD and State’s ability to ensure a timely, coordinated approach to defining and implementing OSC-I may suffer absent an official clarification on the scope of DOD’s activities in post-2011 Iraq in accordance with the anticipated engagement model and the extent to which all DOD government personnel will receive status protections such as privileges and immunities, since DOD may lack a status of forces or other agreement after December 31, 2011. We recommend that the Secretary of Defense take the following four actions. 
To help ensure that DOD will be able to complete the orderly and efficient retrograde and transfer of its equipment and transition of its bases in Iraq by minimizing unanticipated requirements, direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in conjunction with the Secretary of the Army and the Commander, U.S. Central Command, to approve and implement, as appropriate, a process, to include associated policy and training, for acquiring and maintaining real-time visibility of CMGO equipment before it is delivered to the U.S. government that meets the needs of operational forces while retaining oversight features inherent to DOD's current accountability processes; and direct the Commander, U.S. Forces-Iraq, to take steps to collect accurate data on equipment that is found during the large base closure process but not recorded in any property book, and, as appropriate, refine the projection for equipment needing to be retrograded and transferred based on these data.

To maximize its ability to achieve an orderly and efficient drawdown of contracted services in Iraq, direct the Commander, U.S. Forces-Iraq, to (1) assess the risk of providing all contractors, including their subcontractors, with the information—such as base transition dates—required to descope services and demobilize their workforces, against the risk of contractors' inability to meet milestones without it, and take the appropriate actions based on this assessment; (2) take appropriate measures, such as enforcement of guidance laid out in the template to be developed by the office of the Senior Contracting Official-Iraq, to ensure robust contractor planning associated with demobilization; and (3) engage contractors to ensure that total personnel headcounts accurately reflect all personnel, including those working under subcontracts.

To ensure that U.S. government activities in Iraq after December 2011 reflect the appropriate unity of effort and focus DOD and State's efforts on implementing a coordinated approach to defining and implementing the activities to be undertaken by OSC-I, issue a memorandum clarifying the command structure of any DOD elements remaining in Iraq post-2011 and the scope of DOD activities authorized in post-2011 Iraq in accordance with an approved engagement model, including guidance regarding actions or decisions that will be taken in the event adequate privileges, exemptions, and immunities are not obtained for such DOD elements.

In written comments on a draft of this report, DOD concurred with our four recommendations listed above, but asked that our last recommendation be reworded to clarify the timing of our recommendation. We agreed to modify the recommendation to specify that the guidance should be completed once the engagement model is finalized. The Department of State also provided a number of informal technical comments that we considered and incorporated, as appropriate. The Department of State did not provide formal written comments.

In its comments regarding our first recommendation, DOD stated that it agrees that accountability of contractor-managed government-owned equipment is important. DOD further commented that USF-I has developed a Base Transition Smart Book that defines CMGO procedures and provides a series of templates, instructions, and operating procedures that cover the entire base transition process.
While the Base Transition Smart Book may define CMGO procedures, as we note in our report, these procedures do not provide real-time visibility over this category of equipment and we continue to believe that DOD needs to develop a process which will allow real-time visibility of CMGO equipment before it is delivered to the U.S. government. Regarding our second recommendation, DOD commented that it agrees that the collection of accurate data of found equipment is necessary to refine projections for equipment retrograde, and noted that the Base Transition Smart Book provides guidance on how to manage found equipment and update projections for closure. However, as we note in our report, USF-I no longer tracks unaccounted-for equipment that was found remaining on bases that closed. As a result, DOD drawdown planners may lack an accurate planning factor for unaccounted-for government equipment and abandoned contractor equipment left over after the remaining bases in Iraq transition. Therefore we continue to believe that USF-I should take additional steps to collect data on equipment that is found during the base closure process, and use this data to refine the projection for equipment needing to be retrograded and transferred. In response to our third recommendation, DOD commented that it acknowledges the risks associated with providing any contractor critical transition information about base closures and timelines. DOD said that it will address this risk using a vigorous vetting process and security background checks. DOD also commented that it will make certain that demobilization planning captures the associated requirements concerning contractors and their materiel and it further noted that the accountability of all contractor personnel, both prime contractors and their subcontractors, will be maintained through continued Synchronized Predeployment Operational Tracker (SPOT) compliance and the periodic contractor census conducted under the purview of the Commander, U.S. Forces-Iraq. As we have noted in previous reports, however, agency-reported data in SPOT and the census should not be used to identify trends or draw conclusions about the number of contractor personnel due to limitations such as incomplete and inaccurate data. As a result, DOD cannot ensure that contractor personnel are not being undercounted during contractor headcounts, and we continue to believe that additional action to engage with contractors is necessary. Regarding our last recommendation, DOD concurred with the intent of our recommendation but asked that we modify the wording of the recommendation to clarify that the guidance should be developed after the engagement model has been finalized. We agree with DOD’s suggested change and therefore modified our recommendation accordingly. The department also provided an informal technical comment that we considered and incorporated, as appropriate. A complete copy of DOD’s written comments is included in appendix II. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of State; the Chairman of the Joint Chiefs of Staff; and the Secretary of the Army. This report also is available at no charge on our Web site at http://www.gao.gov. Should you or your staffs have any questions concerning this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III.

To determine the extent to which the Department of Defense (DOD) has planned for, begun to execute, and identified and mitigated risks associated with transferring and removing personnel and equipment from the bases remaining open past August 31, 2010, we reviewed and analyzed the major plans that guide the execution of the drawdown, including those published by U.S. Forces-Iraq (USF-I) and U.S. Army Central (ARCENT). We also reviewed other relevant documents, including command briefings, the Security Agreement between the United States and the Republic of Iraq, and DOD joint doctrine. Additionally, we obtained data and documentation and spoke with officials at many organizations and levels involved in the preparation and execution of drawdown plans, including the Office of the Secretary of Defense, USF-I, and ARCENT. We also spoke with officials and obtained data and documentation from a range of supporting commands, including Headquarters, Department of the Army, Logistics; Army Materiel Command; Army Sustainment Command; Defense Logistics Agency; Surface Deployment and Distribution Command; CENTCOM Deployment Distribution Operations Center; CENTCOM-Joint Theater Support Contracting Command; Defense Contract Management Agency; Air Force Contract Augmentation Program; and the Logistics Civil Augmentation Program Office. In support of this effort, we traveled to Kuwait in September 2010 and March 2011. We also traveled to Iraq in April 2011. During these trips we spoke with officials, attended planning conferences, obtained data and documentation, and observed the processes instituted to facilitate the drawdown. We also traveled to Sierra Army Depot to observe the culmination of retrograde operations, as well as to U.S. Army Combined Arms Support Command to discuss the institutionalization of lessons learned from the drawdown. To address Department of State issues impacting the drawdown of forces from Iraq, we obtained documentation and spoke with officials at the U.S. Department of State as well as Embassy Baghdad. Throughout the engagement, the team relied upon staff working from our Baghdad Field Office to conduct interviews with officials in theater, attend planning conferences, and periodically refresh key information.

To determine the extent to which DOD has planned for, begun to execute, and identified and mitigated risks associated with curtailing unneeded contract services, transitioning expiring contracts, and providing adequate contract oversight, we reviewed contracting-specific planning documents, memoranda, and other sources of guidance issued by DOD and subordinate organizations. We also met with contracting officials in Kuwait and Iraq to discuss how military units in Iraq intended to terminate contracted services and demobilize the contractor workforce, while maintaining sufficient oversight on contracts supporting military operations in theater. In addition, we visited three military bases in Iraq and met with the mayor cells to obtain information on contract descoping and demobilization issues specific to those bases and the impact those issues have on the base transition process. We selected these locations because they are all large bases and because travel was possible during the time frame of our visit.
We also met with contracting officers’ representatives (COR) from one base to discuss the challenges that they have encountered in the performance of their contract oversight duties. To supplement our analysis, we observed several contracted services, such as debris removal from Camp Victory and incinerator management at Joint Base Balad, and reviewed plans on how bases intended to end contracted services and demobilize the contractor work force in keeping with base transition plans. Further, we observed ARCENT and USF-I rehearsal of concept drills, a contracting summit organized by USF-I and CENTCOM-Joint Theater Support Contracting Command, and a demobilization orientation session to collect information on contracting issues relevant to the U.S. military withdrawal from Iraq and the transition to a civilian-led presence in Iraq after December 2011. To determine the extent to which DOD has planned for, begun to execute, and mitigated risk associated with facilitating and supporting the transition to a civilian-led presence in Iraq, we reviewed transition-specific planning documents, briefings, and memoranda. We also met with DOD and State officials involved in transition efforts to discuss how DOD and State were coordinating efforts, as well as to discuss the status of activities underway in support of the transition to a civilian-led presence in Iraq. For example, we met with a team of State officials and military liaisons at the Embassy in Baghdad responsible for managing the transition. We also held meetings with the DOD team of officials responsible for coordinating the provision of DOD equipment to State. In addition, we discussed transition efforts during our meetings with officials from a myriad of military commands and DOD organizations, including USF-I, ARCENT, Army Sustainment Command, Defense Logistics Agency, CENTCOM-Joint Theater Support Contracting Command, Defense Contract Management Agency, and the LOGCAP Program Office, among others. To supplement our analysis, we also met with DOD and State officials involved with transition work at a large base in Iraq to observe construction status and to discuss issues associated with the transition. We selected this location based on its status as a large base and because travel was possible during the timeframe of our visit. The team also relied on staff working from the Baghdad Field Office to conduct interviews with officials in theater involved in transition efforts, as well as to attend periodic update meetings, and to regularly update key information. In addition to the contact named above, individuals who made key contributions to this report include Carole F. Coffey, Grace A. Coleman, Gilbert H. Kim, Anne M. McDonough-Hughes, Jason M. Pogacnik, David A. Schmitt, Michael Shaughnessy, Michael Willems, and Matthew R. Young.
The drawdown of U.S. forces in Iraq and the transition from a U.S. military to a civilian-led presence after December 2011 continue amid an uncertain security and political environment. This report is one in a series of reviews regarding the planning and execution of the drawdown. Specifically, this report assesses the extent to which DOD has planned for, begun to execute, and mitigated risk associated with (1) transferring and removing personnel and equipment from remaining bases in Iraq; (2) curtailing unneeded contract services, transitioning expiring contracts, and providing adequate contract oversight; and (3) facilitating and supporting the transition to a civilian-led presence in Iraq. GAO examined relevant DOD planning documents, attended drawdown-related conferences, interviewed State officials and DOD officials throughout the chain of command in the United States, Kuwait, and Iraq, and visited several locations in Kuwait and Iraq to observe drawdown operations. DOD has robust plans and processes for determining the sequence of actions and associated resources necessary to achieve the drawdown from Iraq, which is well underway with a significant amount of equipment removed from Iraq and bases transitioned, among other things. However, several factors contribute to making this phase more challenging than the previous drawdown phase. First, DOD will have less operational flexibility in this phase of the drawdown, yet will need to move a greater amount of equipment than in prior drawdown phases. Second, DOD is closing the largest bases with fewer available resources left on site, which creates a set of challenges and risks greater than what DOD faced during the prior drawdown phase. Although DOD's plans and processes create flexibility and mitigate risk, it has limited visibility over some equipment remaining in Iraq and does not track equipment found on transitioning bases that is not listed on any property accountability record. Without addressing these issues, DOD may miss opportunities to make the drawdown more efficient. DOD has taken action to improve its management of contracts in Iraq, such as enhancing contract oversight and assigning Contracting Officer's Representative responsibilities as a primary duty, although concerns, such as lack of experience among contract oversight personnel, remain. As the drawdown progresses, DOD may face further challenges in ensuring that major contracts transition without gaps in key services. To ensure the continuity of key services while continuing to reduce these services, some units are exploring the option of using local contractors to provide certain services since local contractors do not require extensive support, such as housing, and will not have to be repatriated to their country of origin at the end of the contract, although GAO has previously reported on challenges associated with hiring such firms resulting in the need for greater oversight. Some units also intend to replace contractor personnel with servicemembers to ensure continuity of certain services, such as guard security and generator maintenance. Despite various steps to ease contractor demobilization, DOD faces challenges in demobilizing its contractors, including operational security-driven limits on exchanging information such as base closure dates and ensuring accurate contractor planning. 
Without taking additional steps to address these challenges, DOD may be unable to effectively implement its demobilization guidance and ensure the effective reduction of contract services to appropriate levels and ultimate demobilization of all its contractors. As the U.S. presence in Iraq transitions to a civilian-led presence, although DOD and State interagency coordination for the transition began late, both agencies have now coordinated extensively and begun to execute the transfer or loan to State of a wide range of DOD equipment, while DOD has taken steps to minimize any impact on unit readiness of such transfers. DOD also has agreed to potentially provide State with extensive contracted services, including base and life support, food and fuel, and maintenance, but State may not have the capacity to fund and oversee these services. GAO recommends that DOD take further action to (1) acquire and maintain real-time visibility over contractor-managed government- owned equipment; (2) collect data on unaccounted-for equipment found during base transitions; (3) work with contractors to gather and distribute information needed to demobilize their workforces; and (4) officially clarify the scope of DOD's role in post-2011 Iraq, to include the privileges and immunities to be afforded all DOD government personnel. DOD concurred with all of GAO's recommendations.
As the primary federal agency that is responsible for protecting and securing GSA facilities and federal employees and visitors across the country, FPS has the authority to enforce federal laws and regulations aimed at protecting federally owned and leased properties and the persons on such property. FPS conducts its mission by providing security services through two types of activities: (1) physical security activities—conducting threat assessments of facilities and recommending risk-based countermeasures aimed at preventing incidents at facilities—and (2) law enforcement activities—proactively patrolling facilities, responding to incidents, conducting criminal investigations, and exercising arrest authority. FPS is also responsible for management and oversight of the approximately 15,000 contract security guards posted at GSA facilities. To conduct its mission, FPS has 11 regional offices across the country and maintains a workforce of both law enforcement and non-law enforcement staff. FPS's law enforcement staff is generally composed of three occupations—LESOs, who are also called inspectors; police officers; and special agents—each with different roles and responsibilities. As shown in table 1, LESOs are responsible for the majority of FPS's duties.

FPS funds its operations through the collection of security fees charged to FPS's customers, that is, tenant agencies. However, during fiscal years 2003 through 2006, these fees were not sufficient to cover FPS's operating costs. When FPS was located in GSA, it received additional support from the Federal Buildings Fund to cover the gap between collections and costs. Fiscal year 2004 was the last year that FPS had access to the Federal Buildings Fund, and despite increases in its security fee, FPS continued to experience a gap between its operational costs and fee collections. To mitigate its funding shortfalls, in 2007 FPS implemented many cost-saving measures, including restricting hiring and travel, limiting training and overtime, suspending employee performance awards, and reducing operating hours. FPS also took steps to reduce its staff levels through voluntary early retirement opportunities, and some staff were assigned on detail to other DHS offices. Also during this time, FPS did not replace positions that were lost to attrition. In June 2008, we reported that the funding challenges FPS faced and its cost-saving actions to address them resulted in adverse implications for its workforce, primarily low morale among staff and increased attrition.

To minimize the impact of its funding and operational challenges on its ability to conduct its mission, in early 2007 FPS adopted a new strategic approach to conducting that mission. Faced with the reduction of its workforce to 950 full-time employees and the need to maintain its ability to protect federal facilities, FPS announced the adoption of a "LESO-based" workforce model. The model was intended to make more efficient use of its declining staffing levels by increasing focus on FPS's physical security duties and consolidating law enforcement activities. FPS's goal was to shift its law enforcement workforce composition from a mix of about 40 percent police officers, about 50 percent LESOs, and about 10 percent special agents—its composition when it was transferred to DHS in fiscal year 2003—to a workforce primarily composed of LESOs and some special agents, with the police officer position being gradually eliminated.
To achieve this, FPS began eliminating its police officer position by offering existing police officers the option of applying for LESO positions, which incorporate physical security duties into their existing law enforcement responsibilities. Additionally, FPS eliminated police officers through attrition, and as police officers separated from FPS, their positions were not replaced.

In December 2007 the fiscal year 2008 Consolidated Appropriations Act was enacted; it mandated that FPS's security fees be adjusted to ensure that collections are "sufficient to ensure [that FPS] maintains, by July 31, 2008, not fewer than 1,200 full-time equivalent staff and 900 full-time equivalent [law enforcement staff] who, while working, are directly engaged on a daily basis protecting and enforcing laws at Federal buildings." To address this mandate, FPS began a large-scale hiring effort to bring on new LESOs by the legislated deadline. Although FPS was no longer working toward reducing the size of its workforce, it did not reverse its strategic direction of maintaining a LESO-based workforce. Appropriations are presumed to be annual appropriations and applicable to the fiscal year unless specified to the contrary. The requirement for no fewer than 1,200 full-time-equivalent staff, including 900 full-time law enforcement staff, in DHS's 2008 appropriations act was effective for fiscal year 2008. DHS's appropriations act for 2009 contains the same requirement relating to FPS's staffing level and is effective for fiscal year 2009. The President's budget for fiscal year 2010 requests that a staffing level of 1,225 be maintained in 2010; it also proposes relocating FPS from the Immigration and Customs Enforcement (ICE) component of DHS to the National Protection and Programs Directorate (NPPD) of DHS.

While FPS is currently operating at its mandated staffing level, its hiring process met with delays and challenges. FPS was required to have at least 1,200 full-time employees, including 900 law enforcement employees, on board by July 31, 2008. This same requirement for FPS was included in DHS's fiscal year 2009 appropriations act, and FPS met this staffing level in April 2009, with 1,239 employees on board, including 929 law enforcement staff, by hiring 187 new LESOs. According to human capital officials, FPS did not experience any problems recruiting for its LESO position, receiving over 6,000 applications. However, officials told us that FPS was not able to meet the July 31, 2008, mandate because of the challenges related to shifting its priorities from downsizing its workforce to increasing it to comply with the mandate, inexperience working with DHS's shared service center, and delays in its candidate screening process.

Since transferring to DHS, FPS has been in a period of strategic transition—not only reducing its workforce size, but also changing its composition to a LESO-based workforce. Faced with funding challenges, FPS's human capital efforts were aimed at cutting costs and reducing the size of its workforce to a total staffing level of 950 full-time employees. FPS was on its way to achieving this goal, and had reduced its workforce to 1,061 employees in February 2008 when it changed course to respond to the mandate and increase its workforce to 1,200. According to FPS, these continual shifts affected the agency's ability to meet the staffing level mandated in the 2008 Consolidated Appropriations Act. FPS's ability to meet the mandate was also affected by its inexperience in working with DHS's shared service center.
Since FPS transferred to DHS, the majority of its hiring requirements have been contracted out to the U.S. Customs and Border Protection Human Resources Management Center in Laguna Niguel, California (Laguna), which provides human resource services to all components of ICE through DHS's administrative shared services program. Laguna is responsible for providing a full range of human resource services to FPS, including processing actions related to employee hiring, separation, benefits, and job classification. According to officials at Laguna, there have been some challenges working with FPS; primarily, officials told us that it is unclear what FPS's human capital needs are and where the agency is headed. Additionally, officials at Laguna said that FPS changes its human resource needs on a day-to-day basis and is constantly changing its priorities, causing Laguna to expend a lot of time and manpower in trying to meet the agency's needs. Officials also said that high turnover in FPS management at its headquarters office has contributed to this lack of understanding. Finally, FPS experienced delays in the candidate screening process that hampered its ability to meet the mandate. According to FPS officials, FPS's hiring process can take 5 to 6 months to complete, while the mandate gave the agency about 7 months to bring new staff on board, leaving little margin for delay. FPS then experienced significant delays in screening potential candidates, particularly in the medical screening component, which Laguna contracts out to a private company. The screening process—which consists of drug testing, a background security clearance, and a medical screening—should take approximately 30 to 60 days. FPS officials told us that delays in the medical portion of the screening caused the process to take 90 to 100 days. According to FPS officials, they are working with the contractors to address these problems. For example, FPS officials are working to determine whether candidates recently separated from the military can receive a waiver for the medical screening if they have recently undergone a military medical examination. See figure 1 for a timeline of the FPS hiring process. We have identified human capital management, including the hiring process, as an area in which DHS has significant management challenges. In our 2007 progress report on DHS's management challenges, we found that DHS had made limited progress in managing its human capital. With regard to a timely hiring process, we found that while DHS has developed a 45-day hiring model and provided it to all of its component agencies, it does not assess the component agencies against this model. The prolonged time it takes to select and hire FPS LESOs further demonstrates the limited progress DHS's components have made in meeting the goals of this model. FPS has also experienced delays in its LESO training program. Almost 16 months after FPS began hiring new LESOs, almost half of them have not completed the required law enforcement training and therefore are not permitted to conduct any law enforcement components of their jobs, including carrying firearms or exercising arrest and search authorities. Of these 187 new LESOs, almost all—95 percent—have completed the physical security training that is required to conduct a building security assessment (BSA). Conducting BSAs is the core function of the LESO position, and BSAs are used by FPS to determine and recommend countermeasures to protect federal facilities.
In addition to hiring new LESOs, FPS converted 105 police officers to the LESO position. While all police officers are already trained in law enforcement, 25 percent of the 105 police officers FPS promoted to LESO positions have not completed physical security training and therefore are not eligible to conduct BSAs or recommend countermeasures—their key responsibilities. This training is essential to support FPS's new strategic direction; in his June 2008 testimony, the Director of FPS indicated that physical security responsibilities, such as completing BSAs, would account for 80 percent of a LESO's duties. According to FPS, depending on class availability, it expects to have all newly hired and converted LESOs fully trained by September 2009. According to FPS officials, LESOs who have not completed the physical security training are assisting experienced LESOs in completing their BSAs. During our site visits to FPS's regions, we were told that not having all LESOs fully trained caused a strain on FPS's resources, with LESOs taking on increased workloads. We also spoke with new LESOs in two regions who told us that while they have not received physical security training, they were conducting BSAs with little or no oversight from senior staff. FPS officials told us that training has been delayed because FPS had submitted and finalized its training schedule with the Federal Law Enforcement Training Center (FLETC) over 1 year before it was mandated to increase its staff numbers, and adding classes after that point was a challenge because of limited space and instructors at FLETC. Officials said that FLETC is doing its best to accommodate the number of new hires and converted officers FPS is sending for training. According to FPS officials, FLETC is now holding its Physical Security Training Program every month, back to back. Each class has a maximum of 24 students, and in the past FLETC held at most three Physical Security Training Programs each year. Moreover, FPS has taken limited steps to provide ongoing physical security training to existing LESOs, a fact that limits the effectiveness of experienced LESOs. FPS is currently developing a biannual physical security training program to ensure that LESOs are current in their knowledge of physical security standards and technology. LESOs and regional officials we met with during our site visits told us they did not feel the current level of physical security training was adequate. According to FPS officials, the design and methodology for the new Physical Security Refresher Training Program have been completed, and a headquarters position dedicated to managing the agency's training program has been created, but the program has not been implemented and, as of July 2009, there is no expected date for implementation. While FPS has reached the mandated staffing levels, it continues to have a high attrition rate, and about 30 percent of its employees are eligible to retire in the next 5 years. Since fiscal year 2005, FPS has experienced increases in its overall attrition rate. As we previously reported, FPS experienced funding challenges in the first few years of its transition to DHS and took steps to mitigate these challenges by reducing the size of its workforce. For example, FPS offered its employees early retirement under Voluntary Early Retirement Authority and detailed some employees to other DHS components.
FPS’s attrition rate peaked at over 11 percent in fiscal year 2007, and while it began declining once FPS halted its downsizing efforts, in fiscal year 2008 it was 9 percent, which was higher than the average rate of the federal government and ICE, but lower than DHS’s. In addition, about 30 percent of FPS’s workforce—360 employees—are eligible to retire by 2014, a fact that when combined with its attrition rates could place additional demands on FPS’s hiring process. See figure 2 for a comparison of FPS’s attrition rates with those of the federal government and DHS. FPS currently does not have a strategic human capital plan to guide its current and future workforce planning efforts. Our work has shown that a strategic human capital plan addresses two critical needs: It (1) aligns an organization’s human capital program with its current and emerging mission and programmatic goals, and (2) develops long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. In 2007, FPS took steps toward developing a Workforce Transition Plan to reflect its decision to move to a LESO-based workforce and reduce its workforce to about 950 employees. These steps included the following: identifying skill sets needed to transition employees, including conducting focus groups with senior managers to determine what skills are needed in regions and headquarters and establishing a core curriculum and a career path for FPS occupations in categories of mission support, law enforcement, and supervisory positions; identifying the number of employees who meet current skill set requirements and those who require training and type of training needed; and establishing a project plan to transition employees, including determination of employees eligible for retirement; establishing strategies for use of human capital flexibilities such as bonuses and relocation allowances; and establishing recruitment and retention strategies. However, in 2008, FPS discontinued this plan because the objective of the plan—to reduce FPS staff to 950 to meet the President’s Fiscal Year 2008 Budget—was no longer relevant because of the congressional mandate to increase its workforce to 1,200 employees. FPS subsequently identified steps it needed to take in response to the mandate. However, we found that these efforts do not include developing strategies for determining agency staffing needs, identifying gaps in workforce critical skills and competencies, developing strategies for use of human capital flexibilities, or strategies for retention and succession planning. Additionally, the lack of a current human capital plan has contributed to inconsistent approaches in how FPS regions and headquarters are managing human capital activities for the agency. FPS officials in three of the five of the regions we visited said they implement their own strategies for managing their workforce, including processes for performance feedback, training, and mentoring. For example, one region we visited developed its own operating procedures for a field training program, and has received limited guidance from headquarters on how the program should be conducted. Officials in this region have also taken the initiative in several areas to develop specific guidance and provide employees with feedback on their performance in several areas. Another region we visited offers inspectors supplemental training in addition to required training. 
This region also requires new inspectors to complete a mentoring program in which they accompany an experienced inspector and are evaluated on all aspects of their job. Similarly, a third region we visited has an informal mentoring program for the police officers who were promoted to inspectors; each newly promoted inspector was paired with a senior inspector. Additionally, we found that FPS headquarters does not collect data on its workforce's knowledge, skills, and abilities. Consequently, FPS cannot determine what its optimal staffing levels should be, identify gaps in its workforce needs, or determine how to modify its workforce planning strategies to fill these gaps. Effective workforce planning requires consistent agencywide data on the critical skills needed to achieve current and future programmatic goals and objectives. FPS's human capital activities are performed by a DHS shared service center managed by the U.S. Customs and Border Protection Personnel Systems Division in Laguna Niguel, California. This shared service center provides FPS headquarters with biweekly reports on FPS's workforce statistics, such as workforce demographics and attrition and hiring data by occupation. These reports do not provide insight into FPS's workforce's knowledge, skills, and abilities—information that is key to identifying workforce gaps and engaging in ongoing staff development. In addition to the official data maintained by the shared service center, each FPS region maintains its own workforce data. Without centralized or standardized data on its workforce, it is unclear how FPS can engage in short- and long-term strategic workforce planning. FPS's Risk Assessment and Management Program (RAMP) system is intended to address some of these concerns, but this project has met with numerous delays, and according to FPS officials, data will not be available until fiscal year 2011. Additionally, FPS's human capital challenges may be further exacerbated by the proposal in the President's 2010 budget to move FPS from ICE to NPPD. If the move is approved, it is unclear which agency will perform the human capital function for FPS or how the move will affect FPS's operational and workforce needs. GAO has developed a model of strategic human capital planning to help agency leaders effectively use their personnel and determine how well they integrate human capital considerations into daily decision making and planning for the program results they seek to achieve. Under the principles of effective workforce planning, an agency should determine the critical skills and competencies that will be needed to achieve current and future programmatic results. The agency should then develop strategies tailored to address gaps in the number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies. GAO has identified five key principles that should be addressed in an agency's strategic human capital planning. See table 2 for the key principles and examples of how an agency can implement them. On the basis of our generalizable survey of building security committee chairs and designated officials in facilities protected by FPS, we found that FPS customers had mixed views about the law enforcement and physical security services they paid FPS to provide.
To carry out its mission of protecting federal buildings and the people in those buildings, FPS is authorized to collect security fees from the agencies it protects for law enforcement and physical security services. In fiscal year 2008, FPS's customers paid approximately $187 million for basic security services, such as preparing BSAs, responding to incidents, and providing advice and assistance to building security committees. Our survey, which ended in May 2009, asked FPS customers how satisfied they were with a variety of services they pay FPS to provide. Overall, survey results showed that 58 percent were satisfied or very satisfied with FPS's current level of service, 7 percent were dissatisfied, 18 percent were neutral, and 17 percent were not able to comment on FPS's current level of service. However, our survey also showed that some of FPS's customers could not evaluate specific services. For example, according to our survey, an estimated 28 percent of FPS's customers were satisfied with FPS's response time to emergencies at their facility, while 6 percent were dissatisfied or very dissatisfied, 11 percent were neutral, and 55 percent indicated that they could not comment, to some extent because there may not have been such an incident at their facility. Additionally, our survey suggests that some of FPS's customers may not be satisfied with FPS's decision to eliminate its police officer position and move to a LESO-based workforce: 22 percent of FPS customers thought there were too few patrols of their facility by FPS police officers or LESOs, while no customers indicated that there were too many, 21 percent said the number was about right, and 57 percent were unable to comment. (See app. II for complete questionnaire tabulations.) Our survey also suggests that the communication between FPS and its customers about roles and responsibilities is unclear, in part because, on average, one-third of FPS's customers could not comment on how satisfied or dissatisfied they were with FPS's level of communication on its services, as shown in table 3. For example, an estimated 35 percent of FPS customers could not evaluate FPS's level of communication about the services it can offer tenant agencies, and 12 percent were dissatisfied or very dissatisfied. Additionally, an estimated 36 percent of FPS customers had no basis to report on the frequency with which FPS officials attended meetings about the security of their facility, while about 22 percent indicated that FPS never attends and 18 percent reported rare attendance. Respondents who provided comments on our survey indicated they could not evaluate FPS's services mainly because they had little to no interaction with FPS. For example, one respondent commented that he/she had little or no contact with FPS because the closest FPS office is approximately 150 miles away; this official also noted that he/she was not aware of any services provided by FPS. A respondent in a leased facility commented that FPS has very limited resources and that the resources that are available are assigned to the primary federally owned building in the region. Another respondent commented that during his/her 12-year tenure, he/she could remember only one visit from an FPS officer.
According to FPS officials, with the exception of meetings to discuss BSA reports—which should occur at least every 2 to 4 years, depending on the security level of the facility—FPS does not have policies regarding how frequently LESOs should visit or patrol a customer's facility. However, according to our survey, an estimated 12 percent of FPS customers indicated that FPS had not conducted a BSA within the past 5 years, and about 24 percent did not know whether one had been conducted. Of those customers who indicated a BSA had been conducted, not all were briefed on the results. Although FPS and GSA have an agreement that outlines the services FPS will provide customers in GSA facilities, some customers were not aware of the services FPS provides and the fees that they paid for such services. For instance, a customer in a federally owned building in a remote location did not know that FPS provided 24-hour alarm-monitoring services, because FPS had not visited the office in over 2 years; as a result, the customer purchased an alarm system that was not compatible with FPS's monitoring system. Another customer we spoke to told us that he/she had less need for FPS because the agency occupied a leased facility and relied on law enforcement officers or physical security specialists from his/her own agency. When we followed up with 10 customers who could not comment on FPS's services, we found that 6 of the 10 customers were unaware of the fees they paid FPS, and 4 of the 10 reported that FPS does not provide services to their facility. For example, one customer we spoke to told us that she did not know what FPS's role was with respect to the security of her facility and did not realize that her agency paid FPS a security fee. GSA officials also told us that they have received complaints from customers that they do not know what services they were getting for the basic security fees they paid FPS. For instance, according to a GSA official, a customer at a large government-owned complex was not satisfied with FPS's security recommendation to add security guard posts at the facility for a fee of up to $300,000—in addition to the approximately $800,000 in basic security fees the customer was already paying FPS annually—because the customer reported never seeing FPS officers as part of the basic security services it paid for. In addition to our survey findings about the extent to which customers relied on FPS for services, others have found that while FPS is the primary federal agency responsible for protecting GSA facilities, federal agencies were taking steps to meet their security needs using other sources. GSA officials told us that some federal agencies have not been satisfied with FPS's building security assessments and have started conducting their own assessments. A few agencies have also requested delegations of authority for their buildings from FPS, including the National Archives and Records Administration and the Office of Personnel Management, according to GSA. For example, although the U.S. Marshals Service has delegated authority for building security at federal courthouses, it started a perimeter security pilot program in October 2008 for courthouses in six cities because of concerns with the quality of service provided by FPS contract guards at federal courthouses, according to Marshals Service officials.
Additionally, a 2006 study by ICE found that federal agencies were actively seeking delegations of authority because of increased overhead costs and because agencies wanted more control over the security within their buildings. However, even with delegations of authority for security from FPS, agencies are still expected to pay FPS's fee for basic security services. Moreover, the Office of Management and Budget's 2007 assessment of FPS found that the services provided by FPS were redundant and duplicative of other federal efforts, because many federal agencies—including the U.S. Marshals Court Security, the Secret Service, and the Capitol Police—had their own security offices. GSA has not been satisfied with the level of service FPS has provided and has expressed some concerns about FPS's performance since it transferred to DHS. Because GSA owns and leases the more than 9,000 facilities FPS protects, GSA officials told us that they have a vested interest in the security of these facilities. According to GSA officials, FPS has not been able to provide the level of service GSA expects based on the existing memorandum of agreement between the two agencies. For example, GSA officials said FPS has not been responsive and timely in providing assessments for new leases, a fact that delayed negotiations and procurement of space for tenant agencies. According to FPS, it does not consistently receive notification of pre-lease assessments from GSA, and although FPS is working on developing an interface as part of RAMP to ensure that information is received and appropriately routed for action, this program has been delayed. GSA officials were also concerned about the lack of consistency in the BSA process. Specifically, GSA officials told us that the quality of a BSA can vary depending on the LESO conducting the assessment. While FPS and GSA have taken steps to improve information sharing, communication and coordination continue to be a challenge for them. As we recently reported, at the national level, FPS and GSA have established some formal channels for sharing information, such as holding biweekly meetings, serving on working groups focused on security, and forming a joint Executive Advisory Council, which provides a vehicle for FPS, GSA, and customers to work together to identify common problems and devise solutions. However, GSA officials have been frustrated with FPS's level of communication. Specifically, these officials said that although the frequency of communication has increased, meetings with FPS are not productive because FPS does not contribute to planning the discussions, bringing up issues, or following up on discussion items as promised. Additionally, while FPS's Director views GSA as a partner, GSA officials said communication with FPS staff at levels below senior management has remained difficult and unchanged. Furthermore, FPS and GSA have not been able to reach an agreement about revisions to their current agreement, which, according to GSA officials, does not include requirements regarding communication and measures that ensure the needs of customers are met. Although FPS is responsible for the protection of over 9,000 facilities owned and leased by GSA, it does not have complete and accurate contact data for the customers in these facilities who are responsible for working with FPS to identify security issues and implement security standards for their facility—typically the building security committee chair or a designated official.
During the course of our review, we found that approximately 53 percent of the e-mail addresses and 27 percent of the telephone numbers for designated points of contact were missing from FPS's contact database. Additionally, while FPS was able to provide us with sufficient contact information to conduct our survey, some of the customer data we received for our survey sample were either outdated or incorrect. For example, approximately 18 percent of the survey notification e-mails we sent to customers in our sample were returned as undeliverable. When we attempted to obtain correct e-mail addresses, we found that some of the contacts FPS provided had retired or were no longer with the agency. In some instances, we found that the e-mail address FPS provided was incorrect because of human errors, such as the misspelling of the customer's name. Additionally, our follow-up calls to over 600 sample customers to check on the status of the survey found that FPS did not have the correct telephone numbers for about one-third of these customers, and more than 100 customers provided us with updated contact information. While FPS acknowledges that it needs to improve customer service and has developed some initiatives to increase customer education and outreach, it will continue to face challenges implementing these initiatives without complete and accurate customer contact information. Specifically, one of the three guiding principles in FPS's strategic plan is to foster coordination and information sharing with stakeholders and to strive to anticipate stakeholder needs to ensure it is providing the highest level of service, and FPS has taken some steps to achieve this goal. For instance, in 2007, FPS conducted four focus group sessions to solicit customer input, but this effort was limited to 4 of 11 FPS regions, with a total of 22 customers participating in the discussion. Additionally, FPS developed and distributed four stakeholder newsletters as a result of the focus group sessions. According to FPS, the newsletter was distributed to members of FPS and GSA's Executive Advisory Council as well as to 201 other director-level officials from various federal agencies. FPS's marketing and communications strategy identifies initiatives focused on improving customer service. For example, FPS plans to administer its own customer satisfaction survey with assistance from GSA. FPS's RAMP system is also expected to help improve customer service and allow LESOs to be more customer focused. In particular, RAMP will include a customer relations module that will allow FPS LESOs to better manage their relationships with customers by enabling them to input and access customer information such as building contacts and preferences for meeting times. However, it will be difficult for FPS to implement these or any customer service initiatives before taking steps to ensure it has complete and accurate contact information for all the facilities it protects. Furthermore, our prior work has shown that effective security requires people to work together to implement policies, processes, and procedures. Therefore, without current information for contacting building security committees or officials responsible for security issues, FPS cannot effectively work with customers to ensure federal buildings are secure by communicating critical policies or emergency information such as threats to facilities. In recent years, FPS's human capital efforts have primarily focused on downsizing its workforce and reducing costs.
In December 2007, FPS's funding challenges were mitigated, and it began increasing its workforce to meet a mandated deadline. While FPS's short-term hiring efforts met with some success, because of its attrition rates and the number of employees eligible to retire in the next 5 years, FPS needs to continue to focus on improving its hiring and training processes. We have identified human capital management as a high-risk issue throughout the federal government, and particularly within DHS. FPS's hiring challenges further serve as an example of the importance of improving these processes. Without a long-term strategy for managing its current and future workforce needs, including effective processes for hiring, training, and staff development, FPS will be challenged to align its personnel with its programmatic goals. The President's 2010 budget proposes to transfer FPS from ICE to DHS's National Protection and Programs Directorate; this proposed move presents FPS with a prime opportunity to take the initial steps required to develop a long-term strategic approach to managing its workforce. However, until FPS begins collecting data on its workforce's knowledge, skills, and abilities, FPS will not be able to start and complete this process. Given that FPS customers paid about $187 million in fiscal year 2008 for law enforcement and physical security services, and that our survey showed some customers are unaware of or do not use the services they are paying for, it is particularly important that FPS enhance its interaction with its customers. FPS acknowledges the need for improvement in its customer service and has taken some initial steps toward improvement. Until the benefits of these actions are realized by customers—something that cannot occur until FPS collects complete and accurate contact data for the facilities it provides service to and establishes a process for reaching out to and educating customers on the services they should be receiving—customers will continue to raise questions about the quality of service they are receiving. To facilitate effective strategic management of its workforce, we recommend that the Secretary of Homeland Security direct the Director of FPS to take the following actions: improve how FPS headquarters collects data on its workforce's knowledge, skills, and abilities to help it better manage and understand current and future workforce needs, and use these data in the development and implementation of a long-term strategic human capital plan that addresses key principles for effective strategic workforce planning, including establishing programs, policies, and practices that will enable the agency to recruit, develop, and retain a qualified workforce. To improve service to all of its customers, FPS should collect and maintain an accurate and comprehensive list of all facility-designated points of contact, as well as a system for regularly updating this list, and develop and implement a program for education and outreach to all customers to ensure they are aware of the current roles, responsibilities, and services provided by FPS. We provided a draft of this report to DHS and GSA for review and comment. DHS concurred with the report's findings and recommendations and provided us with technical comments. GSA had no comment. DHS's comments can be found in appendix III. We are sending copies of this report to appropriate committees, the Secretary of Homeland Security, and other interested parties.
In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report examines the workforce and human capital processes and planning efforts of the Federal Protective Service (FPS). Specifically, our objectives were to provide information (1) on the extent that FPS has hired and trained new staff to address its mandated staffing levels, (2) on the extent that FPS has developed a strategic human capital plan to manage its current and future workforce needs, and (3) on the satisfaction of FPS's customers with its services. Our work was initially designed to address congressional concerns about FPS's staffing composition and level since it transferred to the Department of Homeland Security (DHS), and its human capital policies and procedures for hiring and retaining a qualified workforce. Since this work was requested, DHS's 2008 and 2009 appropriations acts mandated FPS to ensure fee collections were sufficient to maintain no fewer than 1,200 full-time equivalents, including 900 law enforcement positions. We also reported that some tenant agencies and stakeholders were concerned about the quality and cost of security provided by FPS since it transferred to DHS. Our findings raised questions about the equity with which FPS has been providing services to customers across the country in facilities with different security needs. In light of these events and our recent findings, we expanded the focus of our review to include an assessment of FPS's efforts to meet the congressional mandate, steps it has taken to address customer concerns, and FPS's customer satisfaction with its services. To respond to the overall objectives of this report, we interviewed officials from FPS, DHS, and the General Services Administration (GSA). We also reviewed relevant laws and FPS, DHS, and GAO documents related to workforce planning and human capital management. We conducted site visits at 5 of FPS's 11 regional offices; while the results of these visits are not generalizable, the 5 regions we visited account for about 50 percent of the approximately 9,000 facilities to which FPS is responsible for providing service. During our site visits, we met with FPS regional law enforcement and human capital managers as well as new and experienced law enforcement security officers (LESOs) to gain an understanding of how FPS's recent workforce changes have affected FPS's operations, the actions each regional office has taken to address these effects, and how regional offices determine workforce needs. We also discussed the regions' role in the agency's human capital planning. To assess the extent to which FPS is fully operational and has met the staffing levels required by Congress, we interviewed officials in FPS's headquarters and officials from DHS's U.S. Customs and Border Protection Human Resources Management Center in Laguna Niguel, California (Laguna), who were responsible for managing, overseeing, and implementing personnel actions for FPS, to understand the actions FPS took, and the challenges it faced, to meet the mandate.
We also reviewed and analyzed FPS workforce data, such as hiring, attrition, separation, and retirement eligibility data, using the Office of Personnel Management's (OPM) Central Personnel Data File (CPDF). We also identified trends in attrition data for FPS employees from fiscal years 2005 through 2008 and compared that information with that of the rest of the federal government and DHS during the same time period. To assess the reliability of OPM's CPDF, we reviewed GAO's prior data reliability work on the CPDF. We also requested attrition and other workforce data from Laguna, which administers FPS's personnel actions, to determine the extent to which the CPDF data matched the agency's data. When we compared the CPDF data with the data provided by Laguna on FPS personnel, we found that the data provided by Laguna were sufficiently similar to the CPDF data and concluded that the CPDF data were sufficiently reliable for the purposes of our review. However, we did not independently verify the workforce data we received from Laguna. To calculate the attrition rate for each fiscal year, we divided the total number of separations from each agency or DHS component by the average of the number of employees in the CPDF at the beginning and at the end of the fiscal year. To place the overall attrition rates for FPS in context, we compared FPS's rates with those for federal employees in Immigration and Customs Enforcement (ICE), a component agency within DHS; DHS as a whole; and the rest of the government. For the purposes of this report, DHS's attrition rates were calculated omitting ICE's attrition rate (including that for FPS), and ICE's attrition rates were calculated omitting FPS's attrition. To determine the extent to which FPS has developed a plan to manage its current and future workforce needs, we reviewed and analyzed FPS and ICE documents related to human capital planning, vacancies for critical positions, and workforce models. We interviewed FPS officials regarding efforts to (1) develop and implement a long-term strategic human capital plan, (2) identify and fill critical vacancies, and (3) analyze current and future workforce needs. We then compared FPS's efforts with the Key Principles of Effective Strategic Workforce Planning identified by GAO. To assess FPS's customer satisfaction with its services, we reviewed the existing memorandum of agreement between DHS and GSA, which outlines the services FPS provides GSA and other federal customers in GSA-controlled buildings. We also met with GSA officials in its central and regional offices to determine their level of satisfaction with FPS's services and the specific actions FPS and GSA have taken to ensure effective communication and coordination. Additionally, we reviewed FPS documents related to customer communication and outreach. In addition, we conducted a Web-based survey of FPS customers in GSA-owned and leased buildings. For the purpose of our survey, we defined FPS customers as building security committee chairpersons and designated officials. We focused on building security committee chairpersons and designated officials because these officials are responsible for working with FPS to identify security issues and implement minimum security standards for their buildings. The survey sought information pertaining to FPS's law enforcement and physical security services, customers' perspectives on the level of service FPS has provided, and observed changes in services over the past 5 years.
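To make the attrition-rate calculation described above concrete, the following is a minimal sketch in LaTeX notation; the figures in the worked example are hypothetical and are not actual FPS data.

```latex
% Attrition rate for a fiscal year, as described above: separations divided by
% the average of beginning-of-year and end-of-year employment counts.
\[
  \text{attrition rate} = \frac{S}{\left(E_{\text{begin}} + E_{\text{end}}\right)/2}
\]
% Hypothetical example (not actual FPS data): S = 100 separations,
% E_begin = 1,050 employees, E_end = 1,150 employees.
\[
  \frac{100}{(1{,}050 + 1{,}150)/2} = \frac{100}{1{,}100} \approx 9.1\%
\]
```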
To identify the appropriate officials to respond to the survey, we constructed our population of FPS customers in GSA-owned and leased facilities from GSA's facilities database as of October 2008, which contained over 9,000 GSA-controlled facilities, and matched customer contact information from FPS's database using GSA-assigned building numbers. We excluded about 670 facilities with data errors or anomalies pertaining to the security level of the facility, as well as security level V facilities, because FPS does not have responsibility for protecting any level V buildings. On the basis of our discussions with GSA officials about the types of facilities in their inventory, we also excluded approximately 1,900 facilities that generally had either (1) few to no occupants, (2) limited use, or (3) no need for public access, such as warehouses, storage, and parking facilities; this resulted in a study population of 6,422 facilities. We selected a stratified random sample of 1,398 facilities from this study population, with the strata defined by region. Table 4 summarizes the sample and sample disposition for each of the strata. As summarized in table 4, we received responses from customers at 760 of the selected facilities (26 of which were out of scope, leaving 734 respondents belonging to our study population), for an overall weighted response rate of approximately 55 percent. We attribute this response rate mainly to outdated or inaccurate FPS contact data. Our initial survey notification e-mail to the 1,398 customers in our sample resulted in approximately 18 percent undeliverable e-mails. Our attempts to obtain e-mail addresses for these customers showed that FPS's data were outdated and inaccurate, because some customers had retired or left the agency. In addition, when we attempted to contact customers to encourage their participation in our survey, we found that FPS did not have the correct telephone numbers for over 200 of the 683 customers who did not respond to our survey. In addition to examining the response rates by sampling strata, we also examined the weighted response rates for other subgroups of the population and did not find wide variations in response rate by a building's security level, whether or not it was leased, or whether it was a single- or multitenant building. We used the information gathered in this survey to calculate estimates about the entire study population of FPS customers in GSA-owned and leased buildings. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from this survey have 95 percent confidence intervals of within plus or minus 5 percentage points of the estimated percentage, unless otherwise noted. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors.
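The methodology above does not spell out the exact estimator used. As an illustration only, under standard textbook assumptions for a stratified random sample (and ignoring any nonresponse adjustment), a population proportion and its 95 percent confidence interval could be computed as sketched below, where N_h, n_h, and p̂_h are the population size, responding sample size, and observed proportion in stratum h; this is not necessarily the exact procedure used for this survey.

```latex
% Stratum-weighted estimate of a population proportion and its 95 percent
% confidence interval, with a finite population correction (textbook form).
\[
  \hat{p} = \sum_{h=1}^{H} \frac{N_h}{N}\,\hat{p}_h,
  \qquad
  \widehat{\mathrm{Var}}(\hat{p}) = \sum_{h=1}^{H}
    \left(\frac{N_h}{N}\right)^{2}
    \left(1-\frac{n_h}{N_h}\right)
    \frac{\hat{p}_h\,(1-\hat{p}_h)}{n_h-1},
  \qquad
  N = \sum_{h=1}^{H} N_h
\]
\[
  \text{95 percent confidence interval: } \hat{p} \pm 1.96\,\sqrt{\widehat{\mathrm{Var}}(\hat{p})}
\]
```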
For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such nonsampling errors. For example, we met with security officials from GSA who were knowledgeable about the roles and responsibilities of FPS and building security committees to gain an understanding of the types of services FPS should be providing customers and to discuss the feasibility of surveying customers in different types of buildings (i.e., leased versus government owned). We also pretested the questionnaire with five building security chairs to ensure the questions were consistently interpreted and understandable. We also corresponded with over 100 customers who contacted us to provide updated contact information. During these conversations, we discussed the relationship between FPS and building security committees/designated officials, including FPS's roles and responsibilities. In addition, we followed up with 10 more customers who had no basis to judge FPS's overall level of service to gain an understanding of their responses to our survey questions and to gather information on aspects of FPS's awareness and outreach efforts. Specifically, we asked them about the types of information they receive from FPS about changes to its services and fee structure as well as actions FPS has taken to solicit their input. A copy of the survey questions and a complete tabulation of the results can be found in appendix II. The questions we asked in our survey on FPS's services are shown below, and the percentages in parentheses indicate the proportion of respondents who chose that particular answer. Unless otherwise noted, all percentages shown are survey estimates that have 95 percent confidence intervals of within plus or minus 5 percentage points of the estimate itself. Please answer all questions based on your experience with the security at _____, with building number _____. If you normally seek advice or support from Security/Law Enforcement/Physical Security Specialists to fulfill your duties as the Building Security Committee Chairperson or Designated Official, please feel free to seek their input to respond to this survey. 1. What agency do you work for? 2. Which personnel function best describes your primary position within your agency? (Select one.) 1. Security personnel (12%) 2. Human resources personnel (0%) 3. Finance personnel (1%) 4. Management (70%) 5. Other (17%) If you answered "Other" above, please specify: 3. How long have you been the Building Security Committee Chairperson/Designated Official for _____? (Select one.) 1. Less than a year (13%) 2. More than 1, but less than 2 years (12%) 3. More than 2, but less than 5 years (32%) 4. 5 or more years (44%) 4. What is the Department of Justice assigned security level at _____? Please indicate the security level at _____ under the 1995 Department of Justice standards, even if the facility has been re-assigned a new security level under the 2008 Interagency Security Committee Standards for Facility Security Level Determinations For Federal Facilities (Select one.) 1. Level I (6%) 2. Level II (22%) 3. Level III (10%) 4. Level IV (10%) 5. Level V (1%) 6. Do not know (50%) 5. Is _____ a government owned or a leased facility? (Select one.) 1. Government owned facility (19%) 2. Leased facility (80%) 3.
Do not know (1%) 6. Is _____ a single or multi-tenant agency facility? (Select one.) 1. Single tenant (34%) 2. Multi-tenant (65%) 3. Do not know (1%) 7. Does your agency have delegated authority for any of the following security services? (Please check all that apply.) 1. 2. 3. 4. 5. 6. Checked Not Checked 87% 90% 90% 82% 94% 61% 7. If you answered “Other” above, please specify: Law Enforcement and Physical Security Providers 8. What law enforcement agency do you consider the primary provider of law enforcement services that require an immediate response to an emergency, such as responding to violent crimes and life threatening incidents, at _____? (Select one.) 1. Federal Protective Service (uniformed police officers and inspectors) (18%) 2. State law enforcement agency (3%) 3. Local law enforcement agency (66%) 4. Other (13%) If you answered “Other” above, please specify: 9. What law enforcement agency do you consider the primary provider of law enforcement services that do not require an immediate response such as enforcing laws and regulations at _____? (Select one.) 1. Federal Protective Service (uniformed police officers and inspectors) (49%) 2. State law enforcement agency (5%) 3. Local law enforcement agency (36%) 4. Other (11%) If you answered “Other” above, please specify: 10. What agency/organization do you consider the primary provider of physical security such as an on-site evaluation and analysis of security at _____? (Select one.) 1. Federal Protective Service (uniformed police officers and inspectors) (48%) 2. General Services Administration, Building Security & Policy Division (12%) 3. My agency’s own internal office (29%) 4. Other (11%) Checked Not checked 68%32% police officers and inspectors) 2. General Services Administration, Building 3. My agency’s own internal office 4. Other If you answered “Other” above, please specify: 12. FPS Provided Private Security Guard Service The following questions are about any service provided by private security guards stationed at your facility that are obtained through a contractual agreement with FPS. If there are no contract security guards provided by FPS at _____, answer NO to question 12 and skip to the next section. 13. Does the Federal Protective Service (FPS) provide private security guards at _____? (Select one.) 1. Yes (36%) 2. No - Skip to question 14. (61%) 3. Do not know - Skip to question 14. (3%) 14. How satisfied are you with the service provided by the security guard(s) at _____? (Select one.) 1. Very satisfied (42%) 2. Satisfied (45%) 3. Neutral (9%) 4. Dissatisfied (3%) 5. Very dissatisfied (0%) 6. No basis to judge/Not applicable (1%) The following questions are about the services provided by Federal Protective Service (FPS) police officers and inspectors. 1. Overall, how satisfied are you with the current level of service provided by the FPS? (Select one.) 1. Very satisfied (25%) 2. Satisfied (34%) 3. Neutral (18%) 4. Dissatisfied (5%) 5. Very dissatisfied (2%) 6. No basis to judge/Not applicable (17%) 15. In your opinion, how has the quality of the following FPS basic security services changed over the past 5-years? (Select one for each row.) a. Law enforcement services that require an immediate response to emergencies such as responding to crimes and incidents Greatly improved (3%) Improved (10%) Stayed about the same (37%) Declined (4%) Greatly declined (2%) No basis to judge/Not applicable (45%) b. 
Other law enforcement services such as patrolling the facility and enforcing federal laws and regulations Greatly improved (2%) Improved (9%) Stayed about the same (36%) Declined (5%) Greatly declined (3%) No basis to judge/Not applicable (45%) Greatly improved (3%) Improved (18%) Stayed about the same (44%) Declined (4%) Greatly declined (2%) No basis to judge/Not applicable (29%) d. Assistance with security plans, such as Occupant Emergency Plans (OEP) and Continuity of Operations Plans (COOP) Greatly improved (2%) Improved (12%) Stayed about the same (33%) Declined (5%) Greatly declined (3%) No basis to judge/Not applicable (45%) 16. In your opinion, how has the quality of the following FPS building specific services changed over the past 5-years? (Select one for each row.) a. Management of security guards - acquisition and monitoring of guards from a private company contracted by FPS for security services Greatly improved (3%) Improved (9%) Stayed about the same (24%) Declined (4%) Greatly declined (1%) No basis to judge/Not applicable (58%) b. Installing, operating, maintaining, and/or repairing security equipment, such as x-ray machines, closed-circuit televisions and cameras, and alarm systems Greatly improved (2%) Improved (7%) Stayed about the same (17%) Declined (4%) Greatly declined (4%) No basis to judge/Not applicable (66%) c. Consultation on security fixtures, such as vehicular barriers, gates, locks, parking lot fencing, and guard booths Greatly improved (2%) Improved (8%) Stayed about the same (22%) Declined (5%) Greatly declined (2%) No basis to judge/Not applicable (62%) 17. How often does FPS attend meetings regarding the security at _____, including meetings about Building Security Assessments and countermeasures? (Select one.) 1. Always (11%) 2. Sometimes (13%) 3. Rarely (18%) 4. Never (22%) 5. No basis to judge/Not applicable (36%) 18. How satisfied are you with FPS police officers’ or inspectors’ current ability to perform the following activities? (Select one for each row.) a. Respond to incidents at your facility Very satisfied (11%) Satisfied (29%) Neutral (12%) Dissatisfied (6%) Very dissatisfied (3%) No basis to judge/Not applicable (39%) b. Patrol your facility Very satisfied (6%) Satisfied (18%) Neutral (15%) Dissatisfied (6%) Very dissatisfied (5%) No basis to judge/Not applicable (50%) c. Provide crime prevention and security trainings for tenant Very satisfied (7%) Satisfied (17%) Neutral (17%) Dissatisfied (6%) Very dissatisfied (4%) No basis to judge/Not applicable (49%) 19. Over the past 5-years, how has FPS police officers’ or inspectors’ ability to perform to the following activities changed? (Select one for each row.) a. Respond to incidents at your facility Greatly increased (3%) Increased (8%) Stayed about the same (36%) Decreased (4%) Greatly decreased (2%) No basis to judge/Not applicable (47%) b. Patrol your facility Greatly increased (2%) Increased (6%) Stayed about the same (27%) Decreased (6%) Greatly decreased (2%) No basis to judge/Not applicable (57%) c. Provide crime prevention and security trainings for tenant Greatly increased (3%) Increased (7%) Stayed about the same (27%) Decreased (5%) Greatly decreased (2%) No basis to judge/Not applicable (56%) 20. How satisfied are you with FPS’s current level of communication with respect to the following? (Select one for each row.) a. 
Services FPS can offer tenant agencies, such as guidance on security issues and crime prevention training Very satisfied (7%) Satisfied (26%) Neutral (20%) Dissatisfied (9%) Very dissatisfied (4%) No basis to judge/Not applicable (35%) b. Information related Building Security Assessments and Very satisfied (8%) Satisfied (30%) Neutral (21%) Dissatisfied (8%) Very dissatisfied (3%) No basis to judge/Not applicable (30%) c. Threats to your facility Very satisfied (8%) Satisfied (28%) Neutral (19%) Dissatisfied (7%) Very dissatisfied (3%) No basis to judge/Not applicable (35%) d. Security related laws, regulations, and guidance Very satisfied (6%) Satisfied (25%) Neutral (23%) Dissatisfied (6%) Very dissatisfied (3%) No basis to judge/Not applicable (37%) e. Information related to the security guards at your facility Very satisfied (6%) Satisfied (19%) Neutral (17%) Dissatisfied (6%) Very dissatisfied (3%) No basis to judge/Not applicable (50%) Very satisfied (7%) Satisfied (31%) Neutral (24%) Dissatisfied (6%) Very dissatisfied (3%) No basis to judge/Not applicable (29%) 21. Over the past 5-years, how has the level of communication with FPS changed with respect to the following? (Select one for each row.) a. Services FPS can offer tenant agencies, such as guidance on security issues and crime prevention training Greatly increased (3%) Increased (14%) Stayed about the same (34%) Decreased (8%) Greatly decreased (2%) No basis to judge/Not applicable (39%) b. Information related Building Security Assessments and Greatly increased (3%) Increased (16%) Stayed about the same (36%) Decreased (7%) Greatly decreased (2%) No basis to judge/Not applicable (37%) c. Threats to your facility Greatly increased (3%) Increased (10%) Stayed about the same (38%) Decreased (5%) Greatly decreased (1%) No basis to judge/Not applicable (43%) d. Security related laws, regulations, and guidance Greatly increased (2%) Increased (9%) Stayed about the same (38%) Decreased (6%) Greatly decreased (1%) No basis to judge/Not applicable (44%) e. Information related to the security guards at your facility Greatly increased (2%) Increased (8%) Stayed about the same (30%) Decreased (5%) Greatly decreased (2%) No basis to judge/Not applicable (53%) Greatly increased (3%) Increased (13%) Stayed about the same (39%) Decreased (5%) Greatly decreased (1%) No basis to judge/Not applicable (38%) 22. Based on your experience, what, if any, were the main actions FPS took over the last 5-years that contributed to the change in quality of service during this period? Checked Not checked 65%35% 64%36% 89%11% 87%13% 29. For the most recent BSA conducted at _____, was the designated official/BSC Chairperson interviewed by the FPS inspector about security concerns or security posture for your facility? (Select one.) 1. Yes (83%) 2. No (9%) 3. No basis to judge/Not applicable (8%) 30. For the most recent BSA conducted at _____, how satisfied were you with the level of interaction you had with FPS on the BSA? (Select one.) 1. Very satisfied (37%) 2. Satisfied (38%) 3. Neutral (15%) 4. Dissatisfied (3%) 5. Very dissatisfied (2%) 6. No basis to judge/Not applicable (5%) 31. Thinking back to the most recent BSA conducted by FPS at _____, were you/your BSC briefed by FPS on the BSA results? (Select one.) 1. Yes (82%) 2. No - Skip to question 34. (18%) 32. Thinking back to the most recent presentation of BSA results by FPS at _____, how satisfied were you with the FPS inspector’s overall presentation of the BSA results and recommendations? 
(Select one.) 1. Very satisfied (40%) 2. Satisfied (44%) 3. Neutral (12%) 4. Dissatisfied (2%) 5. Very dissatisfied (1%) 6. No basis to judge/Not applicable (1%) 33. Thinking back to the most recent presentation of BSA results by FPS at _____, how strongly do you agree or disagree with each of the following statements: (Select one for each row.) a. The FPS inspector was knowledgeable about physical security standards, regulations, and guidelines. Strongly agree (39%) Agree (49%) Neither agree nor disagree (7%) Disagree (1%) Strongly disagree (0%) No basis to judge/Not applicable (4%) b. The FPS inspector provided useful information on the BSA process, including information about threats to the facility and how these threats are tied to the recommended countermeasures. Strongly agree (30%) Agree (43%) Neither agree nor disagree (15%) Disagree (4%) Strongly disagree (1%) No basis to judge/Not applicable (7%) c. The FPS inspector provided useful information on various security countermeasures, including alternatives to recommended countermeasures. Strongly agree (30%) Agree (39%) Neither agree nor disagree (16%) Disagree (5%) Strongly disagree (1%) No basis to judge/Not applicable (8%) d. The FPS inspector provided cost estimates for various security countermeasures. Strongly agree (12%) Agree (21%) Neither agree nor disagree (17%) Disagree (11%) Strongly disagree (3%) No basis to judge/Not applicable (35%) e. The FPS inspector took into consideration the budget cycle(s) of tenant agency(s). Strongly agree (8%) Agree (18%) Neither agree nor disagree (25%) Disagree (6%) Strongly disagree (2%) No basis to judge/Not applicable (40%) f. The FPS inspector sufficiently responded to questions. Strongly agree (32%) Agree (48%) Neither agree nor disagree (10%) Disagree (1%) Strongly disagree (2%) No basis to judge/Not applicable (7%) 34. Thinking back to the most recent presentation of BSA results by FPS at __________, to what extent did FPS prioritize recommended security countermeasures? 35. If you have any comments that on the BSA process or would like to expand on your responses to questions Q26-34, please enter them in the space provided below 36. If you have completed the survey, please check the “Completed” circle below. Clicking “Completed” lets us know that you are finished and that you want us to use your answers. Your answers will not be used unless you have selected the Completed” option to this question. (Select one.) 1. Completed 2. Not completed If you would like to view and print your completed survey, continue to the next screen. Otherwise click on the Exit button below to exit the survey and send your responses to GAO’s server. Thank you! In addition to the contact named above, Tammy Conquest, Assistant Director; Tida Barakat; Brandon Haller; Delwen Jones; Steven Lozano; Susan Michal-Smith; Josh Ormond; Mark Ramage; Kelly Rubin; Lacy Vong; and Greg Wilmoth made key contributions to this report.
The Federal Protective Service (FPS), as part of the Department of Homeland Security (DHS), is responsible for providing security services to about 9,000 federal facilities. In recent years, FPS downsized its workforce from 1,400 to about 1,000 full-time employees. In 2008, GAO expressed concerns about the impact that downsizing had on FPS's mission, and in fiscal years 2008 and 2009 Congress mandated that FPS maintain no fewer than 1,200 employees. GAO was asked to determine the extent to which (1) FPS has hired and trained new staff to address its mandated staffing levels, (2) FPS has developed a strategic human capital plan to manage its current and future workforce needs, and (3) FPS's customers are satisfied with the services it provides. To address these objectives, we reviewed relevant laws and documents, interviewed officials from FPS and other federal agencies, and conducted a generalizable survey of FPS's customers.

FPS did not meet its fiscal year 2008 mandated deadline of increasing its staffing level to no fewer than 1,200 full-time employees by July 31, 2008. This same mandate relating to FPS's staffing was included in DHS's fiscal year 2009 appropriations act. Although FPS currently has over 1,200 employees on board, it did not meet this mandate until April 2009, because of challenges in shifting its priorities from downsizing its workforce to increasing it, inexperience working with DHS's hiring processes, and delays in the candidate screening process. Also, not all of FPS's new law enforcement security officers have completed all required training. According to FPS officials, the agency expects to have all new hires fully trained by September 2009.

FPS does not have a strategic human capital plan to guide its current and future workforce planning efforts, including effective processes for training, retention, and staff development. Instead, FPS has developed a short-term hiring plan that does not include key human capital principles, such as determining an agency's optimum staffing needs. The lack of a human capital plan has contributed to inconsistent approaches in how FPS regions and headquarters are managing human capital activities. For example, FPS officials in some of the regions GAO visited said they implement their own procedures for managing their workforce, including processes for performance feedback, training, and mentoring. Additionally, FPS does not collect data on its workforce's knowledge, skills, and abilities. These elements are necessary for successful workforce planning activities, such as identifying and filling skill gaps and succession planning. FPS is working on developing and implementing a data management system that will provide it with these data, but this system has experienced significant delays and will not be available for use until 2011 at the earliest.

On the basis of GAO's generalizable survey of FPS customers, customers had mixed views about some of the services they pay FPS to provide. Survey results showed that 58 percent were satisfied, 7 percent were dissatisfied, 18 percent were neutral, and 17 percent were not able to comment on FPS's overall services. The survey also showed that many of FPS's customers did not rely on FPS for services. For example, in emergency situations, about 82 percent of FPS's customers primarily rely on other agencies such as local law enforcement, while 18 percent rely on FPS.
The survey also suggests that the roles and responsibilities of FPS and its customers are unclear, primarily because on average about one-third of FPS's customers, i.e., tenant agencies, could not comment on how satisfied or dissatisfied they were with FPS's level of communication on its services, partly because they had little to no interaction with FPS officers. Although FPS plans to implement education and outreach initiatives to improve customer service, it will face challenges because of its lack of complete and accurate contact data. Complete and accurate contact information for its customers is critical for information sharing and an essential component of any customer service initiative.
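To illustrate what a "generalizable" estimate such as the 58 percent satisfaction figure implies, the sketch below computes a normal-approximation confidence interval for a sample proportion. The sample size, and therefore the resulting interval, is hypothetical, since the survey's actual design and respondent count are not described in this excerpt.

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a sample proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical values for illustration only: 58% satisfied among 400 respondents.
low, high = proportion_ci(0.58, 400)
print(f"Estimated satisfaction: 58% (95% CI roughly {low:.0%} to {high:.0%})")
```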
The disposal of LLRW is the end of the radioactive material lifecycle that spans production, use, processing, interim storage, and disposal. The nuclear utility industry generates the bulk of this LLRW through the normal operation and maintenance of nuclear power plants, and through the decommissioning of these plants. Other LLRW is generated from medical, industrial, agricultural, and research applications. Common uses of radioactive material are in radiotherapy, radiography, smoke detectors, irradiation and sterilization of food and materials, measuring devices, and illumination of emergency exit signs. In the course of working with these radioactive materials, other materials, such as protective clothing and gloves, pipes, filters, and concrete, that come in contact with them will become contaminated and therefore need to be disposed of as LLRW.

In the 1960s, the Atomic Energy Commission, a predecessor agency to DOE, began to encourage the development of commercial LLRW disposal facilities to accommodate the increased volume of commercial waste that was being generated. Six such disposal facilities were licensed, two of which, the Richland facility, licensed in 1965, and the Barnwell facility, licensed in 1969, remain today. Each of these facilities is located within the boundaries of or adjacent to a much larger site owned by DOE. A third facility, in Clive, Utah, operated by EnergySolutions (formerly known as Envirocare of Utah), was originally licensed by the state of Utah in 1988 to accept only naturally occurring radioactive waste. In 1991, Utah amended the facility's license to permit the disposal of some LLRW, and the Northwest Compact agreed to allow the facility to accept these wastes from noncompact states. By 2001, the facility was allowed to accept all types of class A waste.

At this time, sufficient available disposal capacity exists for almost all LLRW. However, fast-approaching constraints on the availability of disposal capacity for class B and class C wastes could adversely affect the disposal of many states' LLRW. Specifically, beginning on June 30, 2008, waste generators in 36 states will be precluded from using the Barnwell disposal facility for their class B and class C LLRW. That facility currently accepts about 99 percent of the nation's class B and class C commercial LLRW. Although the Barnwell and Richland facilities have more than sufficient capacity to serve waste generators from the 14 states that are members of the facilities' respective compacts until at least 2050, the remaining 36 states will have no disposal options for their class B and class C LLRW. Although waste generators in these 36 states will no longer have access to Barnwell, they can continue to minimize waste generation, process waste into safer forms, and store waste pending the development of additional disposal options. While NRC prefers the disposal of LLRW, it allows on-site storage as long as the waste remains safe and secure. Since September 11, 2001, both the public's concern with, and its perception of, risk associated with radioactive release, including that from stored LLRW, have increased. However, should an immediate and serious threat come from any specific location of stored waste, NRC has the authority under the act to override any compact restrictions and allow shipment of the waste to a regional or other nonfederal disposal facility under narrowly defined conditions.
Waste minimization techniques and storage can alleviate the need for disposal capacity, but they can be costly. For example, in June 2004 we reported that one university built a $12 million combined hazardous and radioactive waste management facility. Two-thirds of this facility is devoted to the processing and temporary storage of class A waste. Additional disposal capacity for the estimated 20,000 to 25,000 cubic feet of class B and class C LLRW disposed of annually at Barnwell may become available with the opening of a new disposal facility in Texas. This facility has received a draft license and appears to be on schedule to begin operations in 2010. Although the facility may accept some DOE cleanup waste, there is presently no indication that it will be made available to all waste generators beyond the two states that are members of the Texas Compact (Texas and Vermont).

In contrast, available disposal capacity for the nation's class A waste does not appear to be a problem in either the short or long term. Our June 2004 report noted that EnergySolutions' Clive facility had sufficient disposal capacity, based upon then-projected disposal volumes, to accept class A waste for at least 20 years under its current license. This facility currently accepts about 99 percent of the nation's class A LLRW. Since our report was issued, domestic class A waste has declined from about 15.5 million cubic feet in 2005 to about 5 million cubic feet in 2007. This decline is primarily attributed to DOE's completion of several cleanup projects. DOE waste constituted about 50 percent of the total waste accepted by EnergySolutions in 2007. This reduction in projected class A disposal volumes will extend the amount of time the Clive facility can accept class A waste before exhausting its capacity. According to the disposal operator, the facility's capacity has been extended by another 13 years, to a total of 33 years. It is important to note, however, that our June 2004 analysis of available LLRW disposal capacity considered only domestically produced LLRW. We did not consider the impact of imported LLRW on available class A, B, and C disposal capacity at Clive, Barnwell, and Richland. Although disposal capacity at the time of our June 2004 report appeared adequate using then-projected waste disposal volumes, the impact of adding additional waste from overseas waste generators is unclear.

While none of the foreign countries we surveyed for our March 2007 report indicated that they have disposal options for all of their LLRW, almost all either had disposal capacity for their lower-activity LLRW or central storage facilities for their higher-activity LLRW, pending the availability of disposal capacity. Specifically, we surveyed 18 foreign countries that previously had or currently have operating nuclear power plants or research reactors. Ten of the 18 countries reported having available disposal capacity for their lower-activity LLRW and 6 other countries have plans to build such facilities. Only 3 countries indicated that they have a disposal option for some higher-activity LLRW. Many countries that lack disposal capacity for LLRW provide centralized storage facilities to relieve waste generators of the need to store LLRW on-site. Specifically, 7 of the 8 countries without disposal facilities for lower-activity LLRW had centralized storage facilities. Eleven of the 15 countries without disposal facilities for at least some higher-activity LLRW provide central storage facilities for this material.
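Returning to the domestic capacity arithmetic above, the following is a minimal sketch of how a decline in annual disposal volume stretches the life of a fixed amount of licensed capacity. The volumes are hypothetical and are not the Clive facility's actual license figures.

```python
def years_remaining(licensed_capacity_cf: float, annual_volume_cf: float) -> float:
    """Years until a disposal facility exhausts its licensed capacity at a given annual rate."""
    return licensed_capacity_cf / annual_volume_cf

# Hypothetical figures for illustration only (not the Clive facility's actual limits).
capacity = 100_000_000  # cubic feet of remaining licensed capacity
print(years_remaining(capacity, 5_000_000))  # 20 years at 5 million cubic feet per year
print(years_remaining(capacity, 3_000_000))  # ~33 years if annual volume falls to 3 million
```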
Of the 18 countries we surveyed, only Italy indicated that it lacked disposal availability for both lower- and higher-activity LLRW and central storage facilities for this waste. As reported by Italy to the international Nuclear Energy Agency, in 1999, the government began to develop a strategy for managing the liabilities resulting from the country’s past national nuclear activities. The strategy established a new national company to shut down all of Italy’s nuclear power plants and to promptly decommission them. It also created a national agency that would establish and operate a disposal site for radioactive waste. A subsequent government decree in 2001 prompted an acceleration of the process to select a disposal site, with the site to begin operations in 2010. However, the Italian government has more recently reported it has encountered substantial difficulties establishing a disposal site because local governments have rejected potential site locations. In total, Italy will have an estimated 1.1 million cubic feet of lower-activity LLRW that will result from decommissioning its nuclear facilities in addition to the 829,000 cubic feet of this waste already in storage. Our March 2007 report identified several management approaches used in foreign countries that, if adopted in the United States, could improve the management of radioactive waste. These approaches included, among other things, using a comprehensive national radioactive waste inventory of all types of radioactive waste by volume, location, and waste generator; providing disposition options for all types of LLRW or providing central storage options for higher-radioactivity LLRW if disposal options are unavailable; and developing financial assurance requirements for all waste generators to reduce government disposition costs. We also identified another management approach used in most countries—national radioactive waste management plans—that also might provide lessons for managing U.S. radioactive waste. Currently, the United States does not have a national radioactive waste management plan and does not have a single federal agency or other organization responsible for coordinating LLRW stakeholder groups to develop such a plan. Such a plan for the United States could integrate the various radioactive waste management programs at the federal and state levels into a single source document. Our March 2007 report recommended that DOE and NRC evaluate and report to the Congress on the usefulness of adopting the LLRW management approaches used in foreign countries and developing a U.S. radioactive waste management plan. Although both agencies generally agreed with our recommendations, NRC, on behalf of itself and DOE, subsequently rejected two approaches that our March 2007 report discussed. Specifically, NRC believes that the development of national LLRW inventories and a national waste management plan would be of limited use in the United States. In a March 2008 letter to GAO on the actions NRC has taken in response to GAO’s recommendations, NRC stated that the approach used in the United States is fundamentally different from other countries. In particular, NRC argued that, because responsibility for LLRW disposal is placed with the states, the federal government’s role in developing options for managing and/or disposing of LLRW is limited. NRC also expressed concern about the usefulness and significant resources required to develop and implement national inventories and management plans. 
We continue to believe comprehensive inventories and a national plan would be useful. A comprehensive national radioactive waste inventory would allow LLRW stakeholders to forecast waste volumes and to plan for future disposal capacity requirements. Moreover, a national radioactive waste management plan could assist those interested in radioactive waste management to identify waste quantities and locations, plan for future storage and disposal development, identify research and development opportunities, and assess the need for regulatory or legislative actions. For example, there are no national contingency plans, other than allowing LLRW storage at waste generator sites, to address the impending closure of the Barnwell facility to class B and class C LLRW from noncompact states. The availability of a national plan and periodic reporting on waste conditions might also provide the Congress and the public with a more accessible means for monitoring the management of radioactive waste and provide a mechanism to build greater public trust in the management of these wastes in the United States. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Gene Aloise at (202) 512- 3841 or aloisee@gao.gov. Major contributors to this statement were Daniel Feehan (Assistant Director), Thomas Laetz, Lesley Rinner, and Carol Herrnstadt Shulman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Disposal of radioactive material continues to be highly controversial. To address part of the disposal problem, in 1980, Congress made the states responsible for disposing of most low-level radioactive waste (LLRW), and allowed them to form regional compacts and to restrict access to disposal facilities from noncompact states. LLRW is an inevitable by-product of nuclear power generation and includes debris and contaminated soils from the decommissioning and cleanup of nuclear facilities, as well as metal and other material exposed to radioactivity. The Nuclear Regulatory Commission (NRC) ranks LLRW according to hazard exposure--classes A, B, C, and greater-than-class C (GTCC). The states are responsible for the first three classes, and the Department of Energy (DOE) is responsible for GTCC. Three facilities dispose of the nation's LLRW--in Utah, South Carolina, and Washington State. The testimony addresses (1) LLRW management in the United States and (2) LLRW management in other countries. It is substantially based on two GAO reports: a June 2004 report (GAO-04-604) and a March 2007 report (GAO-07-221) that examined these issues. To prepare this testimony, GAO relied on data from the two reports and updated information on current capacity for LLRW and access to disposal facilities.

As GAO reported in 2004, existing disposal facilities had adequate capacity for most LLRW and were accessible to waste generators (hereafter referred to as disposal availability) in the short term, but constraints on the disposal of certain types of LLRW warranted concern. Specifically, South Carolina had decided to restrict access to its disposal facility by mid-2008 for class B and C waste--the facility now accepts about 99 percent of this waste generated nationwide--to only waste generators in the three states of its compact. If there are no new disposal options for class B and C wastes after 2008, licensed users of radioactive materials can continue to minimize waste generation, process waste into safer forms, and store waste pending the development of additional disposal options. While NRC prefers that LLRW be disposed of, it allows on-site storage as long as the waste remains safe and secure.

In contrast, disposal availability for domestic class A waste is not a problem in the short or longer term. In 2004, GAO reported that the Utah disposal facility--which accepts about 99 percent of this waste generated nationwide--could accept such waste for 20 years or more under its current license based on anticipated class A waste volumes. Since 2005, the volume of class A waste disposed of has declined by two-thirds primarily because DOE completed several large cleanup projects, extending the capacity for an additional 13 years, for a total of 33 years of remaining disposal capacity. However, the June 2004 analysis, and the updated analysis, were based on the generation of LLRW only in the United States and did not consider the impact on domestic disposal capacity of importing foreign countries' LLRW.

Ten of the 18 countries surveyed for GAO's March 2007 report have disposal options for class A, B, and most class C waste, and 6 other countries have plans to build such facilities. Only 3 countries indicated that they have a disposal option for some class C and GTCC waste; however, almost all countries that do not provide disposal for LLRW have centralized storage facilities for this waste.
Only Italy reported that it had no disposal or central storage facilities for its LLRW, although it plans to develop a disposal site for this waste that will include waste from its decommissioned nuclear power plants and from other nuclear processing facilities. Italy initially expected this disposal site to be operational by 2010, but local governments' resistance to the location of this disposal site has delayed this date. The March 2007 report also identified a number of LLRW management approaches used in other countries that may provide lessons to improve the management of U.S. radioactive waste. These approaches include the use of comprehensive national radioactive waste inventory databases and the development of a national radioactive waste management plan. Such a plan would specify a single entity responsible for coordinating radioactive waste management and include strategies to address all types of radioactive waste. GAO had recommended that NRC and DOE evaluate and report to the Congress on the usefulness of these approaches. While the agencies considered these approaches, they expressed particular concerns about the significant resources required to develop and implement a national inventory and management plan for LLRW.
OPM and agencies are continuing to address the problems with the key parts of the hiring process we identified in our May 2003 report. Significant issues and actions being taken include the following.

Reforming the classification system. In our May 2003 report on hiring, we noted that many regard the standards and process for defining a job and determining pay in the federal government as a key hiring problem because they are inflexible, outdated, and not applicable to the jobs of today. The process of job classification is important because it helps to categorize jobs or positions according to the kind of work done, the level of difficulty and responsibility, and the qualifications required for the position, and serves as a building block to determine the pay for the position. As you know, defining a job and setting pay in the federal government has generally been based on the standards in the Classification Act of 1949, which sets out the 15 grade levels of the General Schedule system. To aid agencies in dealing with the rigidity of the federal classification system, OPM has revised the classification standards of several job series to make them clearer and more relevant to current job duties and responsibilities. In addition, as part of the effort to create a new personnel system for the Department of Homeland Security (DHS), OPM is working with DHS to create broad pay bands for the department in place of the 15-grade job classification system that is required for much of the federal civil service. Still, OPM told us that its ability to more effectively reform the classification process is limited under current law and that legislation is needed to modify the current restrictive classification process for the majority of federal agencies. As we note in the report we are issuing today, 15 of the 22 CHCO Council members responding to our recent survey reported that either OPM (10 respondents) or Congress (5 respondents) should take the lead on reforming the classification process, rather than the agencies themselves.

Improving job announcements and Web postings. We pointed out in our May 2003 report that the lack of clear and appealing content in federal job announcements could hamper or delay the hiring process. Our previous report provided information about how some federal job announcements were lengthy and difficult to read, contained jargon and acronyms, and appeared to be written for people already employed by the government. Clearly, making vacancy announcements more visually appealing, informative, and easy to access and navigate could make them more effective as recruiting tools. To give support to this effort, OPM has continued to move forward on its interagency project to modernize federal job vacancy announcements, including providing guidance to agencies to improve the announcements. OPM continues to collaborate with agencies in implementing Recruitment One-Stop, an electronic government initiative that includes the USAJOBS Web site (www.usajobs.opm.gov) to assist applicants in finding employment with the federal government. As we show in the report we are issuing today, all 22 of the CHCO Council members responding to our recent survey indicated that their agencies had made efforts to improve their job announcements and Web postings. In the narrative responses to our survey, a CHCO Council member representing a major department said, for example, that the USAJOBS Web site is an excellent source for posting vacancies and attracting candidates.
Another Council member said that the Recruitment One-Stop initiative was very timely in developing a single automated application for job candidates.

Automating hiring processes. Our May 2003 report also emphasized that manual processes for rating and ranking job candidates are time consuming and can delay the hiring process. As we mentioned in our previous report, the use of automation for agency hiring processes has various potential benefits, including eliminating the need for volumes of paper records, allowing fewer individuals to review and process job applications, and reducing the overall time-to-hire. In addition, automated systems typically create records of actions taken so that managers and human capital staff can easily document their decisions related to hiring. To help in these efforts, OPM provides to agencies on a contract or fee-for-service basis an automated hiring system, called USA Staffing, which is a Web-enabled software program that automates the steps of the hiring process. These automated steps would include efforts to recruit candidates, use of automated tools to assess candidates, automatic referral of high-quality candidates to selecting officials, and electronic notification of applicants on their status in the hiring process. According to OPM, over 40 federal organizations have contracted with OPM to use USA Staffing. OPM told us that it has developed and will soon implement a new Web-based version of USA Staffing that could further link and automate agency hiring processes. As we mention in the report we are issuing today, 21 of the 22 CHCO Council members responding to our recent survey reported that their agencies had made efforts to automate significant parts of their hiring processes.

Improving candidate assessment tools. We concluded in our May 2003 report that key candidate assessment tools used in the federal hiring process can be ineffective. Our previous report noted that using the right assessment tool, or combination of tools, can assist the agency in predicting the relative success of each applicant on the job and selecting the relatively best person for the job. These candidate assessment tools can include written and performance tests, manual and automated techniques to review each applicant's training and experience, as well as interviewing approaches and reference checks. In our previous report, we noted some of the challenges of assessment tools and special hiring programs used for occupations covered by the Luevano consent decree. Although OPM officials said they monitor the use of assessment tools related to positions covered under the Luevano consent decree, they have not reevaluated these assessment tools. OPM officials told us, however, that they have provided assessment tools or helped develop new assessment tools related to various occupations for several agencies on a fee-for-service basis. Although OPM officials acknowledged that candidate assessment tools in general need to be reviewed, they also told us that it is each agency's responsibility to determine what tools it needs to assess job candidates. The OPM officials also said that if agencies do not want to develop their own assessment tools, then they could request that OPM help develop such tools under the reimbursable service program that OPM operates. As we state in the report we are issuing today, 21 of the 22 CHCO Council members responding to our recent survey indicated that their agencies had made efforts to improve their hiring assessment tools.
Although we agree that OPM has provided assistance to agencies in improving their candidate assessment tools and has collected information on agencies’ use of special hiring authorities, we believe that major challenges remain in this area. OPM can take further action to address our prior recommendations related to assessment tools. OPM could, for example, actively work to link up agencies having similar occupations so that they could potentially form consortia to develop more reliable and valid tools to assess their job candidates. Despite agency officials’ past calls for hiring reform, agencies appear to be making limited use of category rating and direct-hire authority, two new hiring flexibilities created by Congress in November 2002 and implemented by OPM in June of last year. Data on the actual use of these two new flexibilities are not readily available, but most CHCO Council members responding to our recent survey indicated that their agencies are making little or no use of either flexibility (see fig. 1). OPM officials also confirmed with us that based on their contacts and communications with agencies, it appeared that the agencies were making limited use of the new hiring flexibilities. The limited use of category rating is somewhat unexpected given the views of human resources directors we interviewed 2 years ago. As noted in our May 2003 report, many agency human resources directors indicated that numerical rating and the rule of three were key obstacles in the hiring process. Category rating was authorized to address those concerns. The report we are issuing today also includes information about barriers that the CHCO Council members believed have prevented or hindered their agencies from using or making greater use of category rating and direct hire. Indeed, all but one of the 22 CHCO Council members responding to our recent survey identified at least one barrier to using the new hiring flexibilities. Frequently cited barriers included the lack of OPM guidance for using the flexibilities, the lack of agency policies and procedures for using the flexibilities, the lack of flexibility in OPM rules and regulations, and concern about possible inconsistencies in the implementation of the flexibilities within the department or agency. In a separate report we issued in May 2003 on the use of human capital flexibilities, we recommended that OPM work with and through the new CHCO Council to more thoroughly research, compile, and analyze information on the effective and innovative use of human capital flexibilities. We noted that sharing information about when, where, and how the broad range of personnel flexibilities is being used, and should be used, could help agencies meet their human capital management challenges. As we recently testified, OPM and agencies need to continue to work together to improve the hiring process, and the CHCO Council should be a key vehicle for this needed collaboration. To accomplish this effort, agencies need to provide OPM with timely and comprehensive information about their experiences in using various approaches and flexibilities to improve their hiring processes. OPM—working through the CHCO Council—can, in turn, help by serving as a facilitator in the collection and exchange of information about agencies’ effective practices and successful approaches to improved hiring. 
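To make the procedural difference between numerical ranking under the "rule of three" and category rating concrete, the sketch below uses hypothetical candidate scores and a hypothetical quality-category cutoff; it is illustrative only and omits details such as veterans' preference.

```python
# Hypothetical candidates with numerical assessment scores (illustration only).
candidates = {"Avery": 98, "Blake": 95, "Casey": 94, "Drew": 91, "Emerson": 88}

# Rule of three: the selecting official may choose only from the top three numerical scores.
rule_of_three_pool = sorted(candidates, key=candidates.get, reverse=True)[:3]

# Category rating: candidates are grouped into quality categories (cutoff is hypothetical),
# and the selecting official may choose anyone in the best-qualified category.
best_qualified_pool = [name for name, score in candidates.items() if score >= 90]

print("Rule of three pool: ", rule_of_three_pool)
print("Best-qualified pool:", best_qualified_pool)
```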
Such additional collaboration between OPM and agencies could go a long way to helping the government as a whole and individual agencies in improving the processes for quickly hiring highly qualified candidates to fill important federal jobs. In conclusion, the federal government is now facing one of the most transformational changes to the civil service in half a century, which is reflected in the new personnel systems for DHS and the Department of Defense and in new hiring flexibilities provided to all agencies. Today’s challenge is to define the appropriate roles and day-to-day working relationships for OPM and individual agencies as they collaborate on developing innovative and more effective hiring systems. Moreover, for this transformation to be successful and enduring, human capital expertise within the agencies must be up to the challenge. Madam Chairwoman and Mr. Davis, this completes my statement. I would be pleased to respond to any questions that you might have. For further information on this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, (202) 512-6806 or at mihmj@gao.gov. Individuals making key contributions to this testimony include K. Scott Derrick, Karin Fangman, Stephanie M. Herrold, Trina Lewis, John Ripper, Edward Stephenson, and Monica L. Wolford. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The executive branch hired nearly 95,000 new employees during fiscal year 2003. Improving the federal hiring process is critical given the increasing number of new hires expected in the next few years. In May 2003, GAO issued a report highlighting several key problems in the federal hiring process. That report concluded that the process needed improvement and included several recommendations to address the problems. Today, GAO is releasing a followup report requested by the subcommittee that discusses (1) the status of recent efforts to help improve the federal hiring process and (2) the extent to which federal agencies are using two new hiring flexibilities--category rating and direct-hire authority. Category rating permits an agency manager to select any job candidate placed in a best-qualified category. Direct-hire authority allows an agency to appoint individuals to positions without adherence to certain competitive examination requirements when there is a severe shortage of qualified candidates or a critical hiring need. Congress, the Office of Personnel Management (OPM), and agencies have all taken steps to improve the federal hiring process. In particular, Congress has provided agencies with additional hiring flexibilities, OPM has taken significant steps to modernize job vacancy announcements and develop the government's recruiting Web site, and most agencies are continuing to automate parts of their hiring processes. Nonetheless, problems remain with a job classification process and standards that many view as antiquated, and there is a need for improved tools to assess the qualifications of job candidates. Specifically, the report being released today discusses significant issues and actions being taken to (1) reform the classification system, (2) improve job announcements and Web postings, (3) automate hiring processes, and (4) improve candidate assessment tools. In addition, agencies appear to be making limited use of the two new hiring flexibilities contained in the Homeland Security Act of 2002--category rating and direct-hire authority--that could help agencies in expediting and controlling their hiring processes. GAO surveyed members of the interagency Chief Human Capital Officers Council who reported several barriers to greater use of these new flexibilities. Frequently cited barriers included (1) the lack of OPM guidance for using the flexibilities, (2) the lack of agency policies and procedures for using the flexibilities, (3) the lack of flexibility in OPM rules and regulations, and (4) concern about possible inconsistencies in the implementation of the flexibilities within the department or agency. The federal government is now facing one of the most transformational changes to the civil service in half a century, which is reflected in the new personnel systems for Department of Homeland Security and the Department of Defense and in new hiring flexibilities provided to all agencies. Today's challenge is to define the appropriate roles and day-to-day working relationships for OPM and individual agencies as they collaborate on developing innovative and more effective hiring systems. Moreover, human capital expertise within the agencies must be up to the challenge for this transformation to be successful and enduring.
Following the terrorist attacks of September 11, 2001, the Aviation and Transportation Security Act (ATSA) was enacted in November 2001 and required TSA to work with airport operators to strengthen access controls to secure areas, and to consider using biometric access control systems, or similar technologies, to verify the identity of individuals who seek to enter a secure airport area. In response, TSA established the TWIC program in December 2001. TWIC was originally envisioned as a nationwide transportation worker identity solution to be used by approximately 6 million credential holders across all modes of transportation, including seaports, airports, rail, pipeline, trucking, and mass transit facilities. In November 2002, MTSA further required DHS to issue a maritime worker identification card that uses biometrics to control access to secure areas of maritime transportation facilities and vessels. TSA and USCG decided to implement TWIC initially in the maritime domain; other transportation modes, such as aviation, have a preference for site-specific credentials. As defined by DHS, and consistent with the requirements of MTSA, the purpose of the TWIC program is to design and field a common biometric credential for all transportation workers across the United States who require unescorted access to secure areas at MTSA-regulated maritime facilities and vessels. As stated in the TWIC mission needs statement, the TWIC program aims to meet the following mission needs: positively identify authorized individuals who require unescorted access to secure areas of the nation's transportation system, determine the eligibility of individuals to be authorized unescorted access to secure areas of the transportation system by conducting a security threat assessment, ensure that unauthorized individuals are not able to defeat or otherwise compromise the access system in order to be granted permissions that have been assigned to an authorized individual, and identify individuals who fail to maintain their eligibility requirements subsequent to being permitted unescorted access to secure areas of the nation's transportation system and immediately revoke the individual's permissions.

In 2005, TSA conducted an analysis of alternatives and a cost-benefit analysis to identify possible options for addressing MTSA's requirement to develop a biometric transportation security card that would also meet the related mission needs specified above. On the basis of these analyses, TSA determined that the best alternative was for the federal government to issue a single biometric credential that could be used across all vessels and maritime facilities, and for the government to manage all aspects of the credentialing process—enrollment, card issuance, and card revocation. TSA considered an alternative option based on a more decentralized and locally managed approach wherein MTSA-regulated facilities, vessels, and other port-related entities could issue their own credentials after individuals passed a TSA security threat assessment, but ultimately rejected the option (additional details are provided later in this report). (See Transportation Security Administration, Transportation Worker Identification Credential (TWIC) Program Analysis of Alternatives, Version 2.0, Feb. 15, 2005, and Transportation Worker Identification Credential (TWIC) Program Cost Benefit Analysis, Version 1.0, Aug. 31, 2005.)

We found that an independent assessment of the testing contractor's report identified problems with the report, such as inaccurate and missing information.
As a result, the independent assessment recommended that TSA not rely on the contractor's final report on the TWIC prototype when making future decisions about the implementation of TWIC.

In 2006, the SAFE Port Act (Pub. L. No. 109-347, § 104(a), 120 Stat. 1884, 1888 (codified at 46 U.S.C. § 70105(k))) amended MTSA and directed the Secretary of Homeland Security to, among other things, implement a TWIC reader pilot to test the technology and operational impacts of deploying card readers at maritime facilities and vessels. The pilot began in August 2008. This pilot was conducted with the voluntary participation of maritime port, facility, and vessel operators at 17 sites within the United States. In November 2009, we reported on the TWIC reader pilot design and planned approach, and found that DHS did not have a sound evaluation approach to ensure information collected through the TWIC reader pilot would be complete, accurate, and representative of deployment conditions. Among other things, we recommended that an evaluation plan and data analysis plan be developed to guide the remainder of the pilot and to identify how DHS would compensate for areas where the TWIC reader pilot would not provide the information needed to report to Congress and implement the TWIC card reader rule. DHS concurred with this recommendation. The status of TSA's efforts to develop these plans is discussed later in this report. In addition, the Coast Guard Authorization Act of 2010 required that the findings of the pilot be included in a report to Congress, and that we assess the reported findings and recommendations.

In May 2011, we reported that internal control weaknesses governing the enrollment, background checking, and use of TWIC potentially limited the program's ability to provide reasonable assurance that access to secure areas of MTSA-regulated facilities is restricted to qualified individuals. We also reported that DHS had not assessed the TWIC program's effectiveness at enhancing security or reducing risk for MTSA-regulated facilities and vessels. Further, we reported that DHS had not conducted a risk-informed cost-benefit analysis that considered existing security risks. We recommended, among other things, that DHS (1) assess TWIC program internal controls to identify needed corrective actions; (2) assess the TWIC program's effectiveness; and (3) use the information from the assessment as the basis for evaluating the costs, benefits, security risks, and corrective actions needed to implement the TWIC program in a manner that will meet program objectives and mitigate existing security risks. DHS concurred with our recommendations and has taken steps to assess TWIC program internal controls. Appendix II summarizes key activities in the implementation of the TWIC program.

Over $23 million had been made available to pilot participants from two Federal Emergency Management Agency (FEMA) grant programs—the Port Security Grant Program and the Transit Security Grant Program. Of the $23 million, grant recipients agreed to spend nearly $15 million on the TWIC reader pilot. However, DHS is unable to validate the exact amount grant recipients spent on the TWIC reader pilot, as rules for allocating what costs would be included as TWIC reader pilot costs versus other allowable grant expenditures were not defined. Sixteen of the 17 participating pilot sites were funded using these grants. In addition, TSA obligated an additional $8.1 million of appropriated funds to support the pilot.
USCG's notice of proposed rulemaking published on March 22, 2013, estimated an additional cost of $234.2 million (undiscounted) to implement readers at the 570 facilities and vessels that the proposed TWIC reader rule currently targets. However, USCG does not rule out expanding reader requirements in the future. Appendix III contains additional program funding details.

The TWIC reader pilot was intended to test the technology, business processes, and operational impacts of deploying TWIC readers at secure areas of the marine transportation system. Accordingly, the pilot was to test the viability of using selected biometric card readers to read TWICs within the maritime environment. It was also to test the technical aspects of connecting TWIC readers to access control systems. The results of the pilot are to inform the development of a proposed rule requiring the use of electronic card readers with TWICs at MTSA-regulated vessels and facilities. To conduct the TWIC reader pilot, TSA contracted with the Navy's Space and Naval Warfare Systems Command (SPAWAR) to serve as the independent test agent to plan, analyze, evaluate, and report on all test events. Furthermore, the Navy's Naval Air Systems Command (NAVAIR) conducted environmental testing of select TWIC readers. In addition, TSA partnered with the maritime industry at 17 pilot sites distributed across seven geographic locations within the United States. See appendix IV for a complete listing of the pilot sites, locations, and types of maritime operation each represented. Levels of participation varied across the pilot sites. For example, at one facility, one pedestrian turnstile was tested out of 22 identified entry points. At another, the single vehicle gate was tested, but none of the seven pedestrian gates were tested. At a third facility with three pedestrian gates and 36 truck lanes, all three turnstiles and 2 truck lanes were tested. According to TSA, given the voluntary nature of the pilot, levels of participation varied across the pilot sites, and TSA could not dictate to the respective facilities and vessels specific and uniform requirements for testing.

The TWIC reader pilot, as initially planned, was to consist of three sequential assessments, with the results of each assessment intended to inform the subsequent ones. Table 1 highlights key aspects of the three assessments. To address time and cost constraints related to using the results of the TWIC reader pilot to inform the TWIC card reader rule, two key changes were made to the pilot tests in 2008. First, TSA and USCG inserted an initial reader evaluation as the first step of the initial technical test (ITT). This evaluation was an initial assessment of each reader's ability to read a TWIC. Initiated in August 2008, the initial reader evaluation resulted in a list of biometric card readers from which pilot participants could select readers for use in the pilot rather than waiting for the entire ITT to be completed. Further, the list of readers that passed the initial reader evaluation was used by TSA and USCG to help select a limited number of readers for full functional and environmental testing. Second, TSA did not require the TWIC reader pilot to be conducted in the sequence highlighted in table 1. Rather, pilot sites were allowed to conduct the early operational assessment (EOA) and the system test and evaluation (ST&E) testing while ITT was under way. Various reports were produced to document the results of each TWIC reader pilot assessment.
An overall report was produced to document the ITT results conducted prior to testing at pilot sites. To document the results of testing at each of the 17 pilot sites, the independent test agent produced one EOA report and one ST&E report for each site. These reports summarized information collected from each of the pilot sites and trip reports documenting the independent test agent's observations during visits to pilot sites. On February 27, 2012, DHS conveyed the results of the TWIC reader pilot by submitting the TWIC Reader Pilot Program report to Congress. On March 22, 2013, USCG issued a notice of proposed rulemaking that would, if finalized, require owners and operators of certain MTSA-regulated vessels and facilities to use readers designed to work with TWICs.

Challenges related to pilot planning, data collection, and reporting affect the completeness, accuracy, and reliability of the pilot test aimed at assessing the technology and operational impact of using TSA's TWIC with card readers. Moreover, according to our review of the pilot and TSA's past efforts to demonstrate the validity and security benefits of the TWIC program, the program's premise and effectiveness in enhancing security are not supported.

As we previously reported, TSA encountered challenges in its efforts to plan the TWIC reader pilot. In November 2009, we reviewed and reported on the TWIC reader pilot design and planned approach for collecting data at pilot sites. For example, we reported that the pilot test and evaluation documentation did not identify how individual pilot site designs and resulting variances in the information collected from each pilot site were to be assessed. This had implications for both the technology aspect of the pilot as well as the business and operational aspect. We further reported that pilot site test designs may not be representative of future plans for using TWIC because pilot participants were not necessarily using the technologies and approaches they intend to use in the future when TWIC readers are implemented at their sites. As a result, we reported that there was a risk that the selected pilot sites and test methods would not result in the information needed to understand the impacts of TWIC nationwide. At the time, TSA officials told us that no specific unit of analysis, site selection criteria, or sampling methodology was developed or documented prior to selecting the facilities and vessels to participate in the TWIC reader pilot.

As a result of these challenges, we recommended that DHS, through TSA and USCG, develop an evaluation plan to guide the remainder of the pilot that includes (1) performance standards for measuring the business and operational impacts of using TWIC with biometric card readers, (2) a clearly articulated evaluation methodology, and (3) a data analysis plan. We also recommended that TSA and USCG identify how they will compensate for areas where the TWIC reader pilot will not provide the necessary information needed to report to Congress and inform the TWIC card reader rule. DHS concurred with these recommendations. While TSA developed a data analysis plan, TSA and USCG reported that they did not develop an evaluation plan with an evaluation methodology or performance standards, as we recommended. The data analysis plan was a positive step because it identified specific data elements to be captured from the pilot for comparison across pilot sites.
If accurate data had been collected, adherence to the data analysis plan could have helped yield valid results. However, TSA and the independent test agent did not utilize the data analysis plan. According to officials from the independent test agent, they started to use the data analysis plan but stopped using the plan because they were experiencing difficulty in collecting the required data and TSA directed them to change the reporting approach. TSA officials stated that they directed the independent test agent to change its collection and reporting approach because of TSA's inability to require or control data collection to the extent required to execute the data analysis plan.

However, TSA and USCG did not fully identify how they would compensate for areas where the pilot did not provide the necessary information needed to report to Congress and inform the TWIC card reader rule. For example, such areas could include (1) testing to determine the impact of the business and operational processes put in place by a facility to handle those persons that are unable to match their live fingerprint to the fingerprint template stored on the TWIC and (2) requiring operators using a physical access control system in conjunction with a TWIC to identify how they are protecting personal identity information and testing how this protection affects the speed of processing TWICs. While USCG commissioned two studies to help compensate for areas where the TWIC reader pilot will not provide necessary information, the studies did not compensate for all of the challenges we identified in our November 2009 report. Such challenges included, for example, the impact of adding additional security protection on systems to prevent the disclosure of personal identity information and the related cost and processing implications.

In addition, our review of the TWIC reader pilot approach as implemented since 2009 and resulting pilot data identified some technology issues that affected the reliability of the TWIC reader pilot data. As DHS noted in its report to Congress, successful implementation of TWIC readers includes the development of an effective system architecture and physical access control system and properly functioning TWIC cards, among other things. However, not all TWIC card readers used in the TWIC reader pilot underwent both environmental and functional tests in the laboratory prior to use at pilot sites as originally intended. Because of cost and time constraints, TSA officials instead conducted an initial evaluation of all readers included in the pilot to determine their ability to read a TWIC. These initial evaluations resulted in a list of 30 biometric TWIC card readers from which pilot participants could select a reader for use. However, of these 30 readers, 8 underwent functional testing and 5 underwent environmental testing. None of the TWIC card readers underwent and passed all tests. TSA and independent test agent summary test results note that ambiguities within the TWIC card reader specification—the documented requirements for what and how TWIC card readers are to function—may have led to different interpretations and caused failures of tested TWIC systems. According to TSA, the readers that underwent laboratory-based environmental and functional testing and were placed on the TSA list of acceptable readers did not have problems that would severely impact pilot site operations or prevent the collection of useful pilot data and therefore the readers were all available for use during the pilot.
However, according to our review of the pilot documentation, TSA did not define what “severely impact” meant or performance thresholds for reader problems identified during laboratory-based environmental and functional testing that would severely impact pilot site operations or prevent the collection of useful pilot data. Further, according to TSA officials, TSA could not eliminate 1 of the readers that may have failed a test from the list of acceptable readers when other readers that had not been tested would be allowed on the list. According to TSA officials, doing so would have been an unfair disadvantage to the readers that were selected for the more rigorous laboratory-based environmental and functional testing. In addition, TSA did not provide pilot sites with the results of the laboratory-based environmental and functional testing. According to TSA, it signed confidentiality agreements with reader vendors, which prevented it from sharing this information. The results could have been used to help inform each pilot site’s selection of readers appropriate for its organization’s environmental and operational considerations. This may have hindered TSA’s efforts to determine if issues observed during the pilot were due to the TWIC, TWIC reader, or a facility’s access control system. Nonetheless, TSA determined that information collected during reader laboratory-based testing and at pilot sites was still useful for refining future TWIC reader specifications. In addition, while TWIC cards are intended for use in the harsh maritime environment, the finalized TWIC cards did not undergo durability testing prior to testing at pilot sites. TSA selected card stock that had been tested in accordance with defined standards. However, TSA did not conduct durability tests of the TWIC cards after they were personalized with security features, such as the TWIC holder’s picture, or laminated. According to TSA, technology reasons that may render a TWIC card damaged include, among others, breakage to the antenna or the antenna’s connection to the card’s computing chip. Without testing the durability of personalized TWIC cards, the likelihood that TWIC cards and added security features can withstand the harsh maritime environment is unknown. According to TWIC program officials, each TWIC is tested to ensure it functions prior to being issued to an individual. However, the finalized TWIC card was not tested for durability to ensure that it could withstand the harsh maritime environment because doing so would be costly; TWIC is a fee-funded program, and the officials believed it would be unfair to pass on the cost to further test TWICs to consumers. However, testing TWIC credentials to ensure they can withstand the harsh maritime environment may prove to be more cost-effective, as it could minimize the time lost at access points and the TWIC holder’s need to pay a $60 replacement fee if the TWIC were to fail. The importance of durability testing has been recognized by other government agencies and reported by GAO as a means to identify card failures before issuance. For example, the Department of Defense’s (DOD) common access card—also used in harsh environments such as Afghanistan and other areas with severe weather conditions—has, according to DOD officials, been tested after personalization to ensure that it remains functional and durable. DOD also assesses returned nonfunctioning common access cards to identify the potential cause of card failures. 
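As a back-of-the-envelope illustration of the cost tradeoff discussed above, the sketch below compares expected costs with and without durability testing of personalized cards. Every input except the $60 replacement fee is a hypothetical assumption, not a figure from the report.

```python
# Hypothetical inputs for illustration; only the $60 replacement fee comes from the report.
cards_issued = 1_000_000
replacement_fee = 60          # dollars, paid by the TWIC holder
access_delay_cost = 40        # assumed cost of time lost at access points per failed card
per_card_test_cost = 0.50     # assumed cost of durability testing each personalized card

failure_rate_untested = 0.02  # assumed share of untested cards failing in the field
failure_rate_tested = 0.005   # assumed share failing even after durability screening

cost_without_testing = cards_issued * failure_rate_untested * (replacement_fee + access_delay_cost)
cost_with_testing = (cards_issued * per_card_test_cost
                     + cards_issued * failure_rate_tested * (replacement_fee + access_delay_cost))

print(f"Expected cost without durability testing: ${cost_without_testing:,.0f}")
print(f"Expected cost with durability testing:    ${cost_with_testing:,.0f}")
```

Under these assumptions the up-front testing roughly halves the expected cost, but the conclusion depends entirely on the assumed failure rates and per-card testing cost.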
In addition, in June 2010, as part of our review of another credential program, we recommended that the Department of State fully test or evaluate the security features on its Border Crossing Cards, including any significant changes made to the cards’ physical construction, security features, or appearance during the development process. Thus, durability testing TWIC cards after personalization could have reduced the pervasiveness of problems encountered with malfunctioning TWIC cards during the pilot. As a result of the noted planning and preparation shortfalls, including (1) the absence of defined performance standards for measuring pilot performance, (2) variances in pilot site testing approaches without compensating measures to ensure complete and comparable data were collected, and (3) inadequate testing to ensure that piloted readers and TWICs worked as intended, the data TSA and the independent test agent collected on the technology and operational impacts of using TWIC at pilot sites were not complete, accurate, and reliable. In addition to the pilot planning challenges discussed above, we found that the data collected through the pilot are also not generalizable because of certain pilot implementation and data collection practices we identified. As required by the SAFE Port Act of 2006, the pilot was to test the technology and operational impacts of deploying transportation security card readers at secure areas of the marine transportation system. In addition, as set forth in the TWIC test and evaluation master plan, the TWIC reader pilot was to provide accurate and timely information necessary to evaluate the economic impact of a nationwide deployment of TWIC card readers at over 16,400 MTSA-regulated facilities and vessels, and was to be focused on assessing the use of TWIC readers in contactless mode. However, data were collected and recorded in an incomplete and inconsistent manner during the pilot, further undermining the completeness, accuracy, and reliability of the data collected at pilot sites. Table 2 presents a summary of TWIC reader pilot data collection and supporting documentation reporting weaknesses that we identified that affected the completeness, accuracy, and reliability of the pilot data, which we discuss in further detail below. 1. Installed TWIC readers and access control systems could not collect required data on TWIC reader use, and TSA and the independent test agent did not employ effective compensating data collection measures. The TWIC reader pilot test and evaluation master plan recognizes that in some cases, readers or related access control systems at pilot sites may not collect the required test data, potentially requiring additional resources, such as on-site personnel, to monitor and log TWIC card reader use issues. Moreover, such instances were to be addressed as part of the test planning. However, the independent test agent reported challenges in sufficiently documenting reader and system errors. For example, in its monthly communications with TSA, the independent test agent reported that the logs from the TWIC readers and related access control systems were not detailed enough to determine the reason for errors, such as biometric match failure, an expired TWIC card, or that the TWIC was identified as being on the list of revoked credentials. 
The independent test agent further reported that the inability to determine the reason for errors limited its ability to understand why readers were failing, and thus it was unable to determine whether errors encountered were due to TWIC cards, readers, or users, or some combination thereof. As a result, according to the independent test agent, in some cases the TWIC readers and automated access control systems at various pilot sites were not capable of collecting the data required to assess pilot results. According to the independent test agent, this was primarily due to the lack of reader messaging standards—that is, a set of standard messages readers would display in response to each transaction type. Some readers used were newly developed by vendors, and some standards were not defined, causing inconsistencies in the log capabilities of some readers. The independent test agent noted that reader manufacturers and system integrators—or individuals or companies that integrate TWIC-related systems—were not all willing to alter their systems' audit logs to collect the required information, such as how long a transaction might take prior to granting access. Both TSA and the independent test agent agree that this issue limited their ability to collect the data needed for assessing pilot results. According to TSA officials, TSA allowed pilot participants to select their own readers and related access control systems and audit logs. Consequently, TSA could not require that logs capable of meeting pilot data collection needs be used. In addition, TSA officials noted that the reason for certain errors, such as biometric match failures, could be determined only while the independent test agent was present and had the time and ability to investigate why a TWIC card had been rejected by a reader for access. On average, the independent test agent visited each pilot participant seven times during the early operational assessment and system test and evaluation testing period. TSA further noted that the development or use of alternative automated data collection methods would have been costly and would have required integration with the pilot site's system. However, given that TSA was aware of the data needed from the pilot sites prior to initiating testing and the importance of collecting accurate and consistent data from the pilot, proceeding with the pilot without implementing adequate compensating mechanisms for collecting requisite data or adjusting the pilot design accordingly is inconsistent with the basic components of effective evaluation design and renders the results less reliable. 2. Reported transaction data did not match underlying documentation. A total of 34 pilot site reports were issued by the independent test agent. According to TSA, the pilot site reports were used as the basis for DHS's report to Congress. We separately requested copies of the 34 pilot site reports from both TSA and the independent test agent. In comparing the reports provided, we found that 31 of the 34 pilot site reports provided to us by TSA did not contain the same information as those provided by the independent test agent. Differences for 27 of the 31 pilot site reports pertained to how pilot site data were characterized, such as the baseline throughput time used to compare against throughput times observed during two phases of testing: early operational assessment (EOA) and systems test and evaluation (ST&E). 
For example, TSA inserted USCG’s 6-second visual inspection estimate as the baseline throughput time measure for all pilot site access points in its amended pilot site reports instead of the actual throughput time collected and reported by the independent test agent during baseline data collection efforts. However, at two pilot sites, Brownsville and Staten Island Ferry, transaction data reported by the independent test agent did not match the data included in TSA’s reports. For example, of the 15 transaction data sets in the Staten Island Ferry ST&E report, 10 of these 15 data sets showed different data reported by TSA and the independent test agent. These differences were found in the weekly transactions and the sum total of valid and invalid transactions. According to TSA officials, it used an iterative process to review and analyze pilot data as the data became available to it from the pilot participant sites. In addition, TSA officials noted that the independent test agent’s reports were modified in order to “provide additional context” and consistent data descriptions, and to present data in a more usable or understandable manner. Specifically, according to TSA officials, they and USCG officials believed that they had more knowledge of the data than the independent test agent and there was a need, in some cases, for intervening and changing the test reports in order to better explain the data. USCG officials further noted that the independent test agent’s draft reports were incomplete and lacked clarity, making revisions necessary to make the information more thorough. TSA also reported that it inadvertently used an earlier version of the report and not the final September 2011 site reports provided by the independent test agent to prepare the report to Congress. In addition to differences found in the EOA and ST&E pilot site reports, we found differences between the data recorded during the independent test agent’s visits to pilot sites versus data reported in the EOA and ST&E pilot site reports. Data recorded during the independent test agent’s visits to pilot sites in trip reports were to inform final pilot site reports. The independent test agent produced 76 trip reports containing throughput data. We examined 34 of the 76 trip reports and found that all 34 trip reports contained data that were excluded or did not match data reported in the EOA and ST&E pilot site reports completed by the independent test agent. According to the independent test agent, the trip reports did not match the EOA and ST&E pilot site reports because the trip reports contained raw data that were analyzed and prepared for presentation in the participant EOA and ST&E pilot site reports. However, this does not explain why data reported by date in trip reports do not match related data in the EOA and ST&E pilot site reports. Having inconsistent versions of final pilot site reports, conflicting data in the reports, and data excluded from final reports without explanation calls into question the accuracy and reliability of the data. 3. Pilot documentation did not contain complete TWIC reader and access control system characteristics. Pilot documentation did not always identify which TWIC readers or which interface (e.g., contact or contactless interface) the reader used to communicate with the TWIC card during data collection. For example, at one pilot site, two different readers were tested. However, the pilot site report did not identify which data were collected using which reader. 
Likewise, at pilot sites that had readers with both a contact and a contactless interface, the pilot site report did not always identify which interface was used during data collection efforts. According to TSA officials, sites were allowed to determine which interface to use based on their business and operational needs. According to the independent test agent, it had no control over what interface pilot sites used during testing if more than one option was available. Consequently, pilot sites could have used the contactless interface for some transactions and the contact interface for others without recording changes. The independent test agent therefore could not document with certainty which interface was used during data collection efforts. Without accurate documentation of information such as this, an assessment of TWIC reader performance based on interface cannot be determined. This is a significant data reliability issue, as performance may vary depending on which interface is used, and in accordance with the TWIC reader pilot’s test and evaluation master plan, use of the contactless interface was a key element to be evaluated during the pilot. 4. TSA and the independent test agent did not record clear baseline data for comparing operational performance at access points with TWIC readers. Baseline data, which were to be collected prior to piloting the use of TWIC with readers, were to be a measure of throughput time, that is, the time required to inspect a TWIC card and complete access- related processes prior to granting entry. This was to provide the basis for quantifying and assessing any TWIC card reader impacts on the existing systems at pilot sites. Pilot documentation shows that baseline throughput data were collected for all pilot sites. However, it is unclear from the documentation whether acquired data were sufficient to reliably identify throughput times at truck, other vehicle, and pedestrian access points, which may vary. It is further unclear whether the summary baseline throughput data presented are based on a single access point, an average from all like access points, or whether the data are from the access points that were actually tested during later phases of the pilot. Further complicating the analysis of baseline data is that there was a TSA version of the baseline report and a separate version produced by the independent test agent, and facts and figures in each do not fully match. Where both documents present summary baseline throughput data for each pilot site, the summary baseline throughput data differ for each pilot site. For example, summary baseline throughput data at one pilot site is reported as 4 minutes and 10 seconds in one version of the report but is reported as 47 seconds in the other report. As a result, the accuracy and reliability of the available baseline data are questionable. Further, according to TSA, where summary throughput data were not included in the baseline report, the independent test agent’s later site reports did contain the data. 5. TSA and the independent test agent did not collect complete data on malfunctioning TWIC cards. TSA officials observed malfunctioning TWICs during the pilot, largely because of broken antennas. The antenna is the piece of technology needed for a contactless reader to communicate with a TWIC. 
If a TWIC with a broken antenna was presented for a contactless read, the reader would not identify that a TWIC had been presented, as the broken antenna would not communicate TWIC information to a contactless reader. In such instances, the reader would not log that an access attempt had been made and failed. Individuals holding TWICs with bad antennas had presented their TWICs at contactless readers; however, the readers did not document and report each instance that a malfunctioning TWIC was presented. Instead, as noted by pilot participants and confirmed by TSA officials, pilot sites generally conducted visual inspections when confronting a malfunctioning TWIC and granted the TWIC holder access. While in some cases the independent test agent used a card analysis tool to assess malfunctioning TWICs, TSA officials reported that neither they nor the independent test agent documented the overall number of TWICs with broken antennas or other damage. According to TSA officials, the number of TWICs with broken antennas or other damage was not tracked because failed TWIC cards could be tracked only if an evaluator was present, had access to a card analysis tool, and had the cooperation of the pilot participants to hold up a worker’s access long enough to confirm that the problem was the TWIC card and not some other factor. However, it is unclear why TSA was unable to provide a count of TWICs with broken antennas or other damage based on the TWIC cards that were analyzed with the card analysis tool. While TSA could not provide an accounting of TWICs with broken antennas or other damage experienced during the pilot, pilot participants and other data collected provide additional context and perspective for understanding the nature and extent of TWIC card failure rates during the pilot. Officials at one pilot container facility told us that a 10 percent failure rate would be unacceptable and would slow down cargo operations. However, according to officials from two pilot sites, approximately 70 percent of the TWICs they encountered when testing TWICs against contactless readers had broken antennas or malfunctioned. Further, a separate 2011 report commissioned and led by USCG identified problems with reading TWICs in contactless mode during data collection. This report identified one site where 49 percent of TWICs could not be read in contactless (or proximity) mode, and two other sites where 11 percent and 13 percent of TWICs could not be read in contactless mode. Because TWIC cards malfunctioned, they could not be detected by readers. Accordingly, individuals may have made multiple attempts to get the TWIC reader to read the TWIC card; however, each attempt was not recorded and thus TSA does not have an accurate accounting of the number of attempts or time it may have taken to resolve resulting access issues. Consequently, assessments of the operational impacts of using TWIC with readers using the collected data alone should be interpreted cautiously as they may be based on inaccurate data. In discussing these failure rates with TSA officials, the officials reported that TSA does not have a record of a pilot participant reporting a 70 percent failure rate. 
In addition, they believe that the failure rates reported by pilot sites and the separate USCG-commissioned report are imperfect because they did not have the card analysis tool necessary to confirm a failed TWIC card, and instances where a failed TWIC card was presented at a pilot site could be documented only when the independent test agent was present at the site with a card analysis tool. However, a contractor from TSA visited the facility where the USCG report notes that 49 percent of TWICs could not be read in contactless mode and found that 60 out of 110 TWIC cards checked, or 54.5 percent, would not work in contactless mode. TSA officials agreed that TWIC card failure rates were higher than anticipated and stated that TSA is continuing to assess TWIC card failures to identify the root cause of the failures and correct for them. TSA is also looking to test TWIC cards at facilities that have not previously used TWIC readers to get a better sense of how inoperable TWIC cards might affect a facility operationally. 6. Pilot participants did not document instances of denied access. Incomplete data resulted from challenges documenting how to manage individuals with a denied TWIC across pilot sites. The independent test agent reported that facility security personnel were unclear on how to process people who are denied access by a TWIC reader because of a biometric mismatch or other TWIC card issue. In these cases, pilot site officials would need to receive input from USCG as to whether to grant or deny access to an individual presenting a TWIC card that had been denied. According to TSA officials, during the pilot, if a TWIC reader denied access to a TWIC, the facility could visually inspect the TWIC, as allowed under current regulation, and grant the individual access. However, TSA and the independent test agent did not require pilot participants to document when individuals were granted access based on a visual inspection of the TWIC, or to deny the individual access as may be required under future regulation. This is contrary to the TWIC reader pilot test and evaluation master plan, which calls for documenting the number of entrants "rejected" with the TWIC card reader system operational as part of assessing the economic impact. Without such documentation, the pilot sites were not completely measuring the operational impact of using TWIC with readers. 7. TSA and the independent test agent did not collect consistent data on the operational impact of using TWIC cards with readers. TWIC reader pilot testing scenarios included having each individual present his or her TWIC for verification; however, it is unclear whether this actually occurred in practice. For example, at one pilot site, the independent test agent did not require each individual to have his or her TWIC checked during throughput data collection. Officials at the pilot site noted that during testing, approximately 1 in 10 individuals was required to have his or her TWIC checked while entering the facility because of concerns about causing a traffic backup. They said that this approach was used because pilot site officials believed that reading each TWIC would have caused significant congestion. However, the report for the pilot site does not note this selective use of the TWIC card. In addition, officials from another pilot site reported that truck drivers could elect to go to other lanes that were not being monitored during throughput time collection. 
Officials at this pilot site noted that truck drivers, observing congestion in lanes where throughput time was being collected, used other lanes to avoid delays. This was especially the case when the tested truck lane was blocked to troubleshoot TWIC card and reader problems. However, the pilot site report did not record congestion issues or the avoidance of congestion issues by allowing truck drivers to use alternative lanes where TWIC readers were not being tested. TSA officials also noted that another pilot site would allow trucks entry without using a TWIC reader on an ad hoc basis during the pilot to prevent congestion, making it difficult to consistently acquire the data needed to accurately assess the operational impacts, such as the truck congestion resulting from TWIC cards with readers. Despite the noted deviations in test protocols, the reports for these pilot sites do not note that these deviations occurred. In commenting on this issue, TSA officials noted that these deviations occurred most frequently at those facilities with multiple truck or pedestrian access points where readers were installed at a few access points. Most commonly these facilities were large container terminals. Because of the voluntary nature of the pilot, TSA elected to primarily use reader performance data from facilities that did not install and use readers at all access points. TSA officials further noted that the impact of readers on operations at these facilities necessarily was discounted in the final report to Congress. However, pilot documentation shows that container terminals held the largest population of individuals potentially requiring the use of a TWIC. Noting deviations such as those described above in each pilot site report would have provided important perspective by identifying the limitations of the data collected at the pilot site and providing context when comparing the pilot site data with data from other pilot sites. Further, identifying the presence of such deviations could have helped the independent test agent and TSA recognize the limitations of the data when using them to develop and support conclusions for the pilot report on the business and operational impact of using TWICs with readers. 8. Pilot site reports did not contain complete information about installed TWIC readers' and access control systems' design. TSA and the independent test agent tested the TWIC readers at each pilot site to ensure they worked before individuals began presenting their TWIC cards to the readers during the pilot. As part of this test, information on how each TWIC reader communicated with TWICs and related access control systems was to be documented. In accordance with TWIC test plans, this testing was to specify, among other things, whether the TWIC reader (1) was contactless or required contact with a TWIC, (2) communicated with a facility's physical access control system(s) through a wired or wireless conduit, or (3) granted or denied access to a TWIC holder itself or relied on a centralized access system to make that determination. However, the data gathered during the testing were incomplete. For example, 10 of 15 sites tested readers for which no record of system design characteristics was recorded. Reader information was identified for 4 pilot sites but did not identify the specific readers or associated software tested. Further, 1 pilot site report included reader information for another pilot site and none for its own. 
This limited TSA's ability to assess performance results by various reader and access control system characteristics. The absence of this information is particularly important, as it was the only source of data recorded at pilot sites where reader and operational throughput performance could be assessed at a level of granularity that would allow for the consideration of the array of reader, system design, and entry process characteristics. According to TSA officials, collecting these data was the independent test agent's responsibility, but the independent test agent did not record and provide all required data. The independent test agent maintains that the data are present; however, we reviewed the documentation, and we did not find the data. As we have previously reported, the basic components of an evaluation design include identifying information sources and measures, data collection methods, and an assessment of study limitations, among other things. We further reported that care should be taken to ensure that collected data are sufficient and appropriate, and that measures are incorporated into data collection to ensure that data are accurate and reliable. Data may not be sufficiently reliable if (1) significant errors or incompleteness exist in some or all of the key data elements, and (2) using the data would probably lead to an incorrect or unintentional message. Moreover, in accordance with Standards for Internal Control in the Federal Government, controls are to be designed to help ensure the accurate and timely recording of transactions and events. Properly implemented control activities help to ensure that all transactions are completely and accurately recorded. Having measures in place to ensure collected data are complete, are not subject to inappropriate alteration, and are collected in a consistent manner helps ensure that data are accurate and reliable. However, as discussed in the examples above, TSA and the independent test agent did not take the steps needed to ensure the completeness, accuracy, and reliability of TWIC reader data collected at pilot sites, and the pilot lacked effective mechanisms for ensuring that transactions were completely and consistently recorded. According to TSA, a variety of challenges prevented TSA and the independent test agent from collecting pilot data in a complete and consistent fashion. Among the challenges noted by TSA, (1) pilot participation was voluntary, which allowed pilot sites to stop participation at any time or not adhere to established testing and data collection protocols; (2) the independent test agent did not correctly and completely collect and record pilot data; (3) systems in place during the pilot did not record all required data, including information on failed TWIC card reads and the reasons for the failure; and (4) prior to pilot testing, officials did not expect to confront problems with nonfunctioning TWIC cards. Additionally, TSA noted that it lacked the authority to compel pilot sites to collect data in a way that would have been in compliance with federal standards. Beyond these challenges, the independent test agent identified the lack of a database to track and analyze all pilot data in a consistent manner as a further obstacle to data collection and reporting. The independent test agent, however, noted that all data collection plans and resulting data representation were ultimately approved by TSA and USCG. 
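To illustrate the kind of record-level controls described above, the following sketch applies a few completeness and consistency checks to reader transaction records and reconciles the usable records against a reported total. It is purely illustrative: the field names, values, and checks are assumptions made for the example and do not describe the pilot's actual systems, log formats, or data.

from datetime import datetime

# Illustrative only: hypothetical field names for a reader transaction record.
REQUIRED_FIELDS = {"timestamp", "site", "access_point", "reader_id",
                   "interface", "result", "duration_seconds"}

def check_record(rec):
    """Return a list of problems found in a single transaction record."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - rec.keys())]
    if "timestamp" in rec:
        try:
            datetime.strptime(rec["timestamp"], "%Y-%m-%d %H:%M:%S")
        except ValueError:
            problems.append("unparseable timestamp")
    if rec.get("result") == "denied" and not rec.get("failure_reason"):
        problems.append("denial recorded without a failure reason")
    if rec.get("duration_seconds", 0) < 0:
        problems.append("negative transaction duration")
    return problems

def reconcile(records, reported_total):
    """Count usable records and compare the count with a reported summary figure."""
    usable = [r for r in records if not check_record(r)]
    return {"usable": len(usable),
            "flagged": len(records) - len(usable),
            "matches_reported_total": len(usable) == reported_total}

# Example with two hypothetical records; the second lacks a failure reason.
sample = [
    {"timestamp": "2011-05-02 08:14:09", "site": "Site A", "access_point": "Gate 1",
     "reader_id": "R-07", "interface": "contactless", "result": "granted",
     "duration_seconds": 6.4},
    {"timestamp": "2011-05-02 08:15:31", "site": "Site A", "access_point": "Gate 1",
     "reader_id": "R-07", "interface": "contactless", "result": "denied",
     "duration_seconds": 12.0},
]
print(reconcile(sample, reported_total=2))
# Prints: {'usable': 1, 'flagged': 1, 'matches_reported_total': False}

Run as data are collected, checks of this kind would flag problems such as a denial logged without a reason, or a mismatch between usable records and reported totals, before those records reach summary reports.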
However, our review of pilot test results shows that because the resulting pilot data are incomplete, inaccurate, and unreliable, they should not be used to help inform the card reader rule. While TSA’s stated challenges may have hindered TWIC reader pilot efforts, planning and management shortfalls also resulted in TWIC reader pilot data being incomplete, inaccurate, and unreliable. The challenges TSA and the independent test agent confronted during the pilot limited their data collection efforts, which were a critical piece of the assessment of the technology and operational impacts of using TWIC at pilot sites that were to be representative of actual deployment conditions. As required by the SAFE Port Act and the Coast Guard Authorization Act of 2010, DHS’s report to Congress on the TWIC reader pilot presented several findings with respect to technical and operational aspects of implementing TWIC technologies in the maritime environment. DHS reported the following, among other findings: 1. Despite facing a number of challenges, the TWIC reader pilot obtained sufficient data to evaluate reader performance and assess the impact of using readers at ports and maritime facilities. 2. A biometric match may take longer than a visual inspection alone but not long enough to cause access point throughput delays that would negatively impact business operations. 3. When designed, installed, and operated in manners consistent with the business considerations of the facility or vessel operation, TWIC readers provide an additional layer of security by reducing the risk that an unauthorized individual could gain access to a secure area. In addition, the report noted a number of lessons learned. For example, TWIC cards were found to be sensitive to wet conditions, and users experienced difficulty reading messages on the screens of readers not shielded from direct sunlight, which prevented users from determining the cause of access denial, among other things. According to officials from TSA and DHS’s Screening Coordination Office, many of these lessons learned did not require a pilot in order to be identified, but the pilot did make a positive contribution by helping to validate these lessons learned. Additionally, officials from DHS’s Screening Coordination Office noted that they believe that the report to Congress included a comprehensive listing of the extent to which established metrics were achieved during the pilot program, as required by the Coast Guard Authorization Act of 2010. However, according to our review, the findings and lessons learned in DHS’s report to Congress were based on incomplete or unreliable data, and thus should not be used to inform the development of the future regulation on the use of TWIC with readers. Specifically, incomplete TWIC cost data and unreliable access point throughput time data result in an inaccurate description of the impact of TWIC on MTSA-regulated facilities and vessels. Further, data on the security benefits of TWIC were not collected as part of the pilot and therefore the statements made in DHS’s report to Congress are not supported by the pilot data. DHS’s report identified costs for implementing TWIC readers during the pilot. However, the costs reported by DHS do not represent the full costs of operating and maintaining TWIC readers and related systems within a particular site, or the cost of fully implementing TWIC at all sites. 
First, DHS's reported costs for implementing TWIC with readers during the pilot did not consistently reflect the costs of implementing TWIC at all access points needed for each facility. For example, DHS's report correctly notes that 2 container facilities did not implement TWIC readers at all access points and are therefore not reflective of full implementation. However, on the basis of our analysis and interviews with pilot site officials, at least 5 of the remaining pilot sites would need to make additional investments in readers, totaling 7 pilot sites requiring investments beyond reported expenditures. For example, officials at 2 pilot sites told us that they would need to invest in and install additional readers if reader use was required by regulation. Officials at 3 pilot sites told us that their investment in TWIC readers during pilot testing was not representative of how they would invest in TWIC if regulation required that an individual's TWIC be checked with a reader at each entry. Second, we found that reported implementation costs did not match TSA's supporting documentation for 4 of 17 pilot sites. TSA told us that this discrepancy may be due to having multiple versions of cost data available and relying on different cost documents when compiling the cost data in the DHS report to Congress. The lack of complete and accurate cost data limits the usefulness of the information provided to Congress and does not help inform the development of the future regulation on the use of TWIC with readers. In addition, DHS reported that facilities and vessels that cease issuing site-specific badges and instead use the TWIC card as the only identification needed for access may benefit financially by reducing card management operational costs associated with identity vetting, card inventory, printing equipment, and issuance infrastructure. However, according to TSA, data in support of this finding are based on the statement of one pilot participant who anticipated utilizing the TWIC and not issuing facility badges for access control. Further, DHS's Screening Coordination Office officials noted that the proximity and bar code cards that facilities currently use do not contain the same level of security features that the TWIC card does. However, a related March 2011 study on the use of TWIC with readers commissioned and led by USCG noted that there are significant reliability problems with using TWIC cards, which cost $60 each to replace, in the contactless mode. The report further notes that off-the-shelf industry standard proximity and bar code cards are already inexpensively produced and managed at various facilities; are considered much more functionally reliable than the TWIC; and provide better overall security, since the cards and associated access control systems—such as readers and centralized databases—are less prone to failure. (Systems Planning and Analysis, Inc., Survey of Physical Access Control System Architectures, Functionality, Associated Components, and Cost Estimates: Prepared for the U.S. Coast Guard Office of Standards Evaluation and Development (CG-523), Alexandria, Virginia: March 31, 2011.) In addition, DHS's report presented reader timing data as an indication of the impact of readers on business operations. However, the times and comparisons presented in DHS's report were not throughput times gathered at pilot sites, but reader response times gathered during laboratory testing. The differences between throughput time and reader response times can vary significantly. 
For example, as recorded during the pilot, throughput time at a facility using a TWIC card reader was 1 minute and 36 seconds, whereas reader response time at the facility was 11 seconds. As noted by DHS, throughput time accounts for conditions at a particular facility or access point, including individual processes. In addition, measuring throughput time with TWIC readers and related systems can also capture variances due to system connectivity (e.g., hardwired or wireless connections), installed readers and interfaces, weather, and integration with access control or other business-related systems—all representative of real-world experiences at a given location or type of access point. In contrast, reader response time, as reported by DHS, measures the amount of time a TWIC reader takes to determine whether a TWIC is valid in controlled laboratory settings. Measuring reader response time alone is valuable, as it can help a site determine what amount of increase or decrease in throughput time may be due to TWIC systems alone rather than business processes. However, DHS's reporting of reader response time data was not based on a specific pilot site or group of sites. Instead, it was based on lab testing, which is not representative of the technology challenges sites may face in practice, such as time lags due to the distance between a reader and supporting computing system, types of infrastructure available to implement the TWIC system, or other variables that could delay actual transaction times. Accordingly, DHS's reporting of reader response time is not an effective measure of response time in a real-world environment and therefore is not an accurate representation of response times that might be experienced at maritime ports and facilities. DHS's report to Congress stated that "when designed, installed, and operated in manners consistent with the business considerations of the facility or vessel operations, TWIC readers provide an additional layer of security by reducing the risk that an unauthorized individual could gain access to a secure area." Further, in a written statement presented before Congress on June 28, 2012, DHS officials stated that TWIC enhances port facility and vessel security and that the pilot operation also highlighted security and operational benefits associated with readers, including the automation of access control, so that regular users could use their TWICs for quick and easy processing into a port. However, USCG told us that assessment of security benefits was outside the scope of the TWIC reader pilot. Further, TSA confirmed that data regarding the security enhancements provided by TWIC were not collected during the pilot because that was neither the goal nor the legislative mandate of the TWIC reader pilot. Such data might include, for example, data on the number of people turned away at pilot access points for security infractions, information from covert testing at pilot sites, or other types of data to show enhanced security resulting from the implementation of TWIC. 
Moreover, the study commissioned by USCG suggested that comparable security may be realized by allowing facilities and vessels to use a combination of traditional access control systems with the TSA background check, also known as a security threat assessment. The findings of the study commissioned by USCG and the findings of our prior reviews of TSA's efforts to demonstrate the validity and security benefits of the TWIC program, coupled with the cost of expanding the program to include the installation of TWIC readers at ports throughout the country, raise significant concerns about the program's premise and effectiveness. While MTSA required the Secretary of Homeland Security to issue biometric transportation security cards to individuals for unescorted entry to secure areas of vessels or facilities, TSA did consider other models for implementing the TWIC program and enhancing security. However, we have found that key reasons for electing to proceed with a government-issued TWIC card have not been validated in practice. Specifically, in February 2005, TSA completed an analysis of alternatives that identified two viable models for implementing TWIC in accordance with MTSA requirements and worthy of additional consideration: (1) a federally managed option wherein the federal government would issue a credential and manage all aspects of the credentialing program except for making access control decisions at entry points to regulated operations, and (2) a federally regulated, decentralized option with a more limited federal role in which the federal government would conduct background checks and MTSA-regulated entities would be responsible for all other aspects of enrolling individuals and implementing a credential system that would comply with federal regulations. The analysis of alternatives concluded that the federally managed option would best meet security needs and stated mission needs, including ensuring that (1) unauthorized individuals would be denied access to secure areas of the nation's transportation system and (2) individuals failing to maintain their eligibility requirements would have their access permissions revoked, among others. In part, these conclusions were based on the premise that the federally managed TWIC option would first establish and verify an individual's claimed identity; that once the individual's identity had been verified, it would be checked against threat and background check information prior to issuing a TWIC; and that once a TWIC was issued, cardholder eligibility would continue to be checked. However, in May 2011, we reported that the TWIC program was not meeting its four program goals, or mission needs, because of internal control weaknesses. Among other things, we reported that internal controls in the enrollment and background checking processes were not designed to provide reasonable assurance that only qualified individuals could acquire TWICs or that, once issued TWICs, TWIC holders had maintained their eligibility. In August 2005, TSA completed an additional analysis comparing the potential costs and benefits of the two alternatives, concluding that the federally managed solution was the most economical choice because the potential benefits outweighed the costs. As noted in the analysis, reasons for selecting the federally managed approach included assumptions such as the following: The lack of a common credential across the industry could leave facilities open to a security breach with falsified credentials. 
Under the decentralized federally regulated solution, each facility would have to perform its own background checks instead of leveraging a federal background check or security threat assessment. The federally managed solution would eliminate security weaknesses in existing identification systems by, among other things, having built-in security features such as sponsorship from a trusted individual or company. (Transportation Security Administration, Transportation Worker Identification Credential (TWIC) Program Cost Benefit Analysis, Version 1.0, August 31, 2005.) However, these analyses did not include an assessment of each alternative's technological maturity and readiness to be used as a security measure at MTSA-regulated entities without impeding commerce. Moreover, as the TWIC reader pilot and the study commissioned by USCG demonstrate, TWIC cards and readers are not operating as envisioned. In addition, our reviews of the TWIC program using the federally managed option over several years, as well as other credentialing models used at airports and federal agencies, raise questions about the validity of the assumptions TSA made at the inception of the program. For example, in the airport credentialing model, the organization granting access to an individual leveraged the existing federal process for conducting background checks, and there is no requirement for a single federal security credential. The federal government is also able to recover some of the costs of the program through user fees, as it does under other credentialing and endorsement models such as the Hazardous Materials endorsement for truck drivers, where applicants pay $89.25 to have their TSA security threat assessments conducted. The American Association of Airport Executives and airport operators argue that maintaining their own site-specific credentials enhances security over a standard, centrally issued credential such as TWIC and best leverages the combined local and federal knowledge for determining access decisions. Likewise, federal agencies issue their own agency-specific credentials for controlling access. Unlike the currently implemented TWIC program, the airport and agency-specific credentialing models intrinsically rely on organizational sponsorship, such as sponsorship by an employer, to help validate an individual's identity prior to conducting background checks to enhance security. In discussing these issues, TSA officials noted, however, that the statute as currently written requires the Secretary of Homeland Security to issue the biometric credential, and therefore decentralized issuance of the TWIC may be inconsistent with congressional intent. Furthermore, one of the driving assumptions in the TWIC cost-benefit analysis was that the lack of a common credential across the industry could leave facilities open to a security breach with falsified credentials. However, the validity of this assumption is questionable. As we reported in May 2011, our investigators conducted a small number of covert tests to assess the use of TWIC as a means for controlling access to secure areas of MTSA-regulated facilities. During covert tests of TWIC at several selected ports, our investigators were successful in accessing ports using counterfeit TWICs, authentic TWICs acquired through fraudulent means, and false business cases (i.e., reasons for requesting access). However, our investigators did not gain unescorted access to a port where a secondary port-specific identification was required in addition to the TWIC. 
The investigators' possession of TWIC cards provided them with the appearance of legitimacy and facilitated their unescorted entry into secure areas of MTSA-regulated facilities and ports at multiple locations across the country. We have also reported that DHS had not assessed the effectiveness of TWIC at enhancing security or reducing risk for MTSA-regulated facilities and vessels, or demonstrated that TWIC, as currently implemented and planned with readers, is more effective than prior approaches used to limit access to ports and facilities, such as using facility-specific identity credentials with business cases. To determine if the internal control weaknesses identified in our May 2011 report (GAO-11-657) still exist, we conducted limited covert testing in late 2012. Our investigators again acquired an authentic TWIC through fraudulent means and were able to use this card and counterfeit TWIC cards to access areas of ports or port facilities requiring a TWIC for entry at four ports. In its NPRM, USCG asserted that using TWIC with readers would enhance maritime security, basing this assessment on, among other things, TWIC pilot findings, USCG's risk-based approach to categorizing vessels and facilities, and Maritime Security Risk Analysis Model (MSRAM) terrorist scenarios that could potentially be thwarted by using TWIC. However, we noted the following issues in the supporting analysis. With regard to the TWIC pilot findings, as we previously noted, TSA did not collect data during the TWIC pilot regarding the security enhancements provided by TWIC. According to USCG, assessing security benefits was outside the scope of the TWIC pilot. We therefore cannot assess USCG's claim in its NPRM that TWIC enhances maritime security. The purpose of USCG's analysis for categorizing vessels and facilities into risk categories was to allocate where to place readers, not to assess the effectiveness of TWIC or determine the extent to which, or if, use of TWIC with readers would enhance security, reduce risk, or address a specific threat. Rather, USCG assumed that TWIC would help reduce the risk of a terrorist attack at a maritime facility or vessel based on the security threat assessment, but did not consider whether use of the TWIC might introduce a security risk to MTSA-regulated facilities and vessels, or whether use of TWIC would enhance security beyond efforts already in place. USCG's NPRM lists three MSRAM terrorist scenarios that, according to USCG, are most likely to be mitigated by the use of TWIC readers—truck bomb, terrorist assault team, and passenger/passerby explosives/improvised explosive device. According to USCG, because the function of the TWIC reader is to enhance access control, the deployment of TWIC readers would increase the likelihood of identifying and denying access to an individual attempting nefarious acts. However, USCG's preliminary analysis notes that the use of TWIC with readers would not stop terrorists from detonating a truck at the perimeter of a facility, attempting to break through the gates or protective barriers at a facility, or obtaining a TWIC card using fraudulent documents as we did through covert means. As confirmed with USCG officials, its models for assessing the benefit of TWIC do not account for these known security weaknesses. Further, USCG's draft regulatory impact analysis may lead to an overestimate (or mischaracterization) of the avoided consequences of using TWIC with readers. 
This is because the calculation is based on the use of TWIC with readers thwarting worst-case terrorist security incidents rather than a range of avoided consequence estimates, some of which would be lower than what was presented in the draft regulatory analysis. While USCG has issued the TWIC-reader NPRM and has asserted benefits to be derived by using TWIC with electronic readers, USCG has not conducted an effectiveness assessment of the TWIC program, as we recommended in 2011; thus, it is unclear whether there will be sufficient time to complete the effectiveness assessment prior to the issuance of the rule. In November 2012, USCG officials reported that they are considering taking steps to assess the effectiveness of TWIC, but noted that given the complexity of the effort, the effectiveness assessment may be better suited for another organization, such as the Department of Homeland Security’s Centers of Excellence, to conduct. We continue to believe that the effectiveness assessment would help inform future requirements for using TWIC with biometric card readers if the study was completed and included as part of the TWIC reader regulatory analysis. Further, given USCG’s leading role in assessing and implementing security programs intended to enhance maritime security, we believe that USCG should continue to be involved in conducting this analysis. With potentially billions of dollars needed to implement the TWIC program, it is important that DHS provide effective stewardship of taxpayer funds and avoid requiring the maritime industry to invest in a program that may not achieve its stated goals. DHS estimates that implementing the TWIC program could cost the federal government and the private sector a combined total of as much as $3 billion over a 10- year period. This does not include an additional estimated $234.2 million (undiscounted) to implement readers at 570 facilities and vessels that the TWIC reader NPRM currently targets. The TWIC reader pilot, conducted at a cost of approximately $23 million, was intended to test the technology and operational impacts of TWIC cards with readers in the maritime environment. However, as a result of weaknesses in the pilot’s planning, implementation, and reporting, data from the TWIC reader pilot cannot be relied upon to make decisions regarding the TWIC card reader rule or the future deployment of the TWIC program. Additionally, the TWIC reader pilot report concluded that TWIC cards and readers provide a critical layer of security at our nation’s ports. However, 11 years after initiation, the TWIC program continues to be beset with significant internal control weaknesses and technology issues, and, as highlighted in our prior and ongoing work and a related USCG report, the security benefits of the program have yet to be demonstrated. The weaknesses we have identified suggest that the program as designed may not be able to fulfill the principal rationale for the program— enhancing maritime security. Correcting technological problems with the cards and readers alone will not address the security vulnerabilities identified in our previous work or the USCG reports. The depth and pervasiveness of the TWIC program’s planning and implementation challenges require a reassessment of DHS’s efforts to improve maritime security through the issuance of a U.S. government-sponsored TWIC card and card readers. 
It is important that this reassessment occur before the additional investment of funds is made to install TWIC readers at the nation’s ports, at considerable taxpayer expense. Given that the results of the pilot are unreliable for informing the TWIC card reader rule on the technology and operational impacts of using TWICs with readers, Congress should consider repealing the requirement that the Secretary of Homeland Security promulgate final regulations that require the deployment of card readers that are consistent with the findings of the pilot program. Instead, Congress should require that the Secretary of Homeland Security first complete an assessment that evaluates the effectiveness of using TWIC with readers for enhancing port security, as we recommended in our May 2011 report, and then use the results of this assessment to promulgate a final regulation as appropriate. Given DHS’s challenges in implementing TWIC over the past decade, at a minimum, the assessment should include a comprehensive comparison of alternative credentialing approaches, which might include a more decentralized approach, for achieving TWIC program goals. We provided a draft of this report to DHS and DOD for review and comment. DHS provided written comments, which are printed in full in appendix V. DHS, as well as DOD, provided technical comments, which we incorporated as appropriate. In commenting on this report, DHS identified concerns with our findings and conclusions related to the use of the TWIC reader pilot results. For example, DHS asserted that the TWIC reader pilot did obtain data in sufficient quantity and quality to support the general findings and conclusions of the TWIC reader pilot report, and that the pilot obtained sufficient data to evaluate reader performance and assess the impact of using readers at maritime facilities. We disagree with this assertion. Specifically, as discussed in our report, and as confirmed by the supplemental technical comments provided by DHS, the pilot test’s results were incomplete, inaccurate, and unreliable for informing Congress and for developing a regulation about the readers. For example, as discussed in the report: Installed TWIC readers and access control systems could not collect required data, including reasons for errors, on TWIC reader use, and TSA and the independent test agent did not employ effective compensating data collection measures, such as manually recording reasons for errors in reading TWICs. TSA and the independent test agent did not record clear baseline data for comparing operational performance at access points with TWIC readers. TSA and the independent test agent did not collect complete data on malfunctioning TWIC cards. Moreover, in its written comments, DHS confirmed that the voluntary nature of the pilot limited opportunities for random selection of pilot sites, as we noted in our report. Therefore, the results of the pilot cannot be generalized beyond the 17 sites participating in the pilot. Further, according to DHS, we asserted that the pilot data should have been assessed using the same data collection and reporting methods for “determining the reliability of computer-processed data.” We recognize that the voluntary nature of the pilot posed challenges to the department; however, we evaluated the TWIC pilot data against recognized federal guidance for designing evaluations, and Standards for Internal Control in the Federal Government in addition to assessing the reliability of computer-processed data. 
Because of the significant issues we identified in this report concerning the reliability of the data collected during the pilot, when we sent the draft report to DHS for comment, we recommended that DHS not use the results collected at pilot sites on the operational impacts of using TWIC with readers to inform the upcoming TWIC card reader rule or the future deployment of the TWIC program. However, subsequent to sending the draft to DHS for comment, on March 22, 2013, USCG published the TWIC card reader NPRM, which included results from the TWIC card reader pilot. We subsequently removed the recommendation from the report, given that USCG moved forward with issuing the NPRM and incorporated the pilot results. DHS asserted that some of the perceived data anomalies we cited are not significant to the conclusions TSA reached during the pilot and that the pilot report was only one of multiple sources of information available to USCG in drafting the TWIC reader NPRM. We recognize that USCG had multiple sources of information available to it when drafting the proposed rule; however, the pilot was used as an important basis for informing the development of the NPRM. Thus, we believe that the NPRM is based on findings and conclusions that are inaccurate, and unreliable for informing Congress and for developing the TWIC Card Reader Rule. In its addendum to its agency comments, DHS provides explanations for some of the weaknesses that we identified in the pilot program. We acknowledge these challenges but believe that they support our conclusion that the results of the pilot program should not be used to inform the card reader rule. Further, related to the security benefits of the program, in its written comments, DHS maintains that a common credential used across MTSA- regulated facilities and vessels enhances security. DHS further stated that comparing airport access to maritime port access is inappropriate because most airport workers only access one airport, whereas individuals accessing maritime ports and facilities are more likely to access several different facilities. We recognize the value of conducting the security threat assessment for all workers accessing port facilities; however, TSA has not assessed the security benefits, if any, resulting from use of a common credential versus a port-, facility-, or vessel-based credential. Moreover, we continue to believe, as discussed earlier in this report, that the original assumptions that TSA made when it decided to proceed with the use of TWIC as a common credential are questionable. Thus, a comprehensive comparison of alternative credentialing approaches, which could include a more decentralized approach, would provide the necessary assurance that DHS is pursuing the most effective option for enhancing maritime security. We are sending copies of this report to the Secretaries of Homeland Security and Defense, the Assistant Secretary for the Transportation Security Administration, the Commandant of the United States Coast Guard, and appropriate congressional committees. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix VI. 
The Coast Guard Authorization Act of 2010 required that the Transportation Worker Identification Credential (TWIC) reader pilot report include (1) the findings of the pilot program with respect to key technical and operational aspects of implementing TWIC technologies in the maritime sector; (2) a comprehensive listing of the extent to which established metrics were achieved during the pilot program; and (3) an analysis of the viability of those technologies for use in the maritime environment, including any challenges to implementing those technologies and strategies for mitigating identified challenges. The act further required that we conduct an assessment of the report's findings and recommendations. To meet this requirement, we addressed the following question: To what extent were the results from the TWIC reader pilot sufficiently complete, accurate, and reliable for informing Congress and the TWIC card reader rule? To evaluate the extent to which the results from the TWIC reader pilot were sufficiently complete, accurate, and reliable for informing Congress and the TWIC card reader rule, we assessed (1) TWIC reader pilot test planning and preparation activities, (2) pilot implementation and data collection practices, and (3) the findings reported in the Department of Homeland Security's (DHS) February 2012 report to Congress on the results of the TWIC reader pilot against underlying pilot data. To identify and assess TWIC reader pilot test planning and preparation activities, we reviewed our prior reports and testimonies on the TWIC program issued from September 2003 through May 2011, and key documents related to the TWIC reader pilot. We reviewed the following pilot planning and testing documents to understand the pilot's design and planned approach, and to assess the extent to which pilot test plans were updated and used since our November 2009 report on the subject matter. TWIC Contactless Biometric Card and Reader Capability Pilot Test, Test and Evaluation Master Plan (TEMP), dated December 2007; TWIC Pilot Concept of Operations Plan, signed February 19, 2009; TWIC Pilot Test Reader Usage Scenarios, dated February 2, 2009; TWIC Initial Technical Test (ITT) Plan, signed March 20, 2009; TWIC Reader Functional Specification Conformance Test (F-SCT) Plan, dated March 2009; Naval Air (NAVAIR) Systems Command's TWIC Card Reader Environmental and Electrical Test Plan, dated February 28, 2008; TWIC Reader Environmental Specification Conformance Test (E-SCT) Plan, dated March 23, 2009; Initial Capability Evaluation Scenarios, Version 1.5, dated June 2008; Space and Naval Warfare Systems Command (SPAWAR), Systems Center (SSC) Atlantic, TWIC Initial Capability Evaluation Test Plan, Draft Version 1.1, dated November 13, 2008; TWIC Baseline Data Collection Plan, dated January 2009; TWIC Early Operational Assessment (EOA) Test Plan, signed March 18, 2009; and TWIC Reader Pilot Program System Test and Evaluation (ST&E) Test Plan, dated February 2010 (signed August 4, 2011). We further reviewed the TWIC Reader Pilot Program Data Analysis Plan, dated October 2010. The plan was developed in response to our November 2009 recommendations to develop an evaluation plan and data analysis plan to identify pilot data to be collected and associated data collection approaches.
We also recommended that the evaluation plan identify areas for which the TWIC reader pilot would not provide the information needed to report to Congress and implement the TWIC card reader rule, and document the compensating information to be collected and an approach for obtaining and evaluating the information obtained through this effort. We assessed the extent to which the TWIC Data Analysis Plan addressed our 2009 recommendations and the extent to which it was used during the pilot. We also reviewed the extent to which two studies commissioned by the U.S. Coast Guard (USCG) addressed our 2009 recommendations. (See GAO, Border Security: Improvements in the Department of State's Development Process Could Increase the Security of Passport Cards and Border Crossing Cards, GAO-10-589 (Washington, D.C.: June 1, 2010).) We also assessed the readiness of readers for use during the pilot. Specifically, we considered TSA's modified approach for testing and assessing reader readiness prior to use at pilot sites as well as the results of the more detailed environmental and functional reader testing conducted. We further reviewed reader testing plans and results to identify and assess the performance criteria used to determine whether tested readers would severely impact pilot site operations or prevent the collection of useful pilot data. To identify and assess the pilot as implemented, we reviewed relevant legislation, such as the Maritime Transportation Security Act of 2002 (MTSA), amendments to MTSA made by the Security and Accountability For Every Port Act of 2006 (SAFE Port Act), and the Coast Guard Authorization Act of 2010 to inform our review of requirements for TWIC and the TWIC reader pilot specifically. We further reviewed key TWIC reader pilot test documents, such as the TWIC reader pilot test and evaluation master plan and underlying test protocols, and compared planned pilot testing and data collection practices with the methods used to collect and analyze pilot data. In doing so, we reviewed and assessed the following documents where TWIC reader pilot results were recorded. TWIC Reader Pilot Program Baseline Report, dated December 2010; TWIC Initial Technical Test Report, dated September 2010; TWIC Card Reader Environmental Specification Conformance and Evaluation Test, signed March 2, 2010; TWIC Reader Pilot Program TWIC Early Operational Assessment Summary Report, signed February 6, 2012; Early operational assessment reports (final reports) provided by TSA and the independent test agent for each of the 17 pilot sites; TWIC Reader Pilot Program System Test and Evaluation Summary Report, signed February 6, 2012; Systems test and evaluation reports (ST&E) (final reports) provided by TSA and the independent test agent for each of the 17 pilot sites; 117 pilot site trip reports where on-site observations were recorded against data recorded in final EOA and ST&E reports; TWIC Reader Pilot Program Data Analysis Plan, dated October 2010; 46 TWIC Program Weekly and Monthly Status Reports provided by the independent test agent; and TSA's TWIC Reader Pilot Cost Summary Report by Participant. We further assessed TWIC reader pilot data collection efforts against established practices for designing evaluations and assessing the reliability of computer-processed data, as well as internal control standards for collecting and maintaining records.
To do so, we identified practices in place and assessed whether measures and internal controls were in place to ensure the resulting data were sufficiently complete, accurate, and reliable. We further interviewed officials representing 14 of the 17 participating pilot sites, the independent test agent (SPAWAR) and relevant agency officials that oversaw or contributed to the pilot results at TSA and USCG about pilot testing approaches, results, and challenges. While information we obtained from the interviews with officials representing 14 of the 17 participating pilot sites may not be generalized across the maritime transportation industry as a whole, because we selected TWIC reader pilot participants located across the nation and representing varying maritime operations, the interviews provided us with information on the views of individuals and organizations that participated in the pilot and could be directly affected by TWIC reader use requirements. We also reviewed pilot site reports and underlying data to assess the extent to which data in these reports were collected and assessed in a consistent and complete manner, so as to ensure the data and the analysis thereof could result in accurate and reliable findings. TSA reported that it relied on each of the final EOA and ST&E reports for each of the 17 pilot sites—a total of 34 reports—as the basis of its report to Congress. Accordingly, we tested the data in each of the 34 reports as follows. 1. We requested that TSA and the independent test agent each provide us with final copies of each pilot site’s EOA and ST&E pilot site reports. We compared the 34 reports provided by TSA with the 34 reports provided by the independent test agent to validate whether the final reports provided by each entity were identical. We also reviewed the 117 pilot site trip reports provided by TSA and the independent test agent. Pilot site trip reports documented observations made by TSA or the independent test agent during visits to each pilot site and were to serve as input to the final EOA and ST&E pilot site reports. Of the 117 pilot site trip reports, 76 contained access point throughput data. We further reviewed 34 of 76 pilot site trip reports to identify the extent to which all collected observations and data were included in the final EOA and ST&E pilot site reports, and to determine if reasons for exclusions, if any, were documented. While information we obtained from our review of the 34 pilot site trip reports compared with the final EOA and ST&E pilot site reports cannot be generalized, the reports provided us with important insight on potential limitations present in reported pilot data. 2. We employed computer-based testing techniques, including the development of a database, to assess the completeness of collected data as well as the consistency of data collected across pilot sites. To do so, we used TWIC reader pilot data results recorded in the TWIC Reader Pilot Program Baseline Report and the 34 final EOA and ST&E pilot site reports. We linked results reported in the baseline report and each pilot site’s EOA or ST&E reports where data were present for a particular pilot site, access point, and reader. 
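To illustrate the kind of record linkage just described, the sketch below shows one way baseline, EOA, and ST&E records could be joined by pilot site, access point, and reader and then summarized. It is a simplified illustration only: the field names (site, access_point, reader, throughput_s, transactions) and the use of pandas are assumptions for the example, not the database or schema actually built for the analysis.

```python
# Simplified sketch of linking baseline, EOA, and ST&E records by pilot site,
# access point, and reader, then summarizing throughput transactions.
# Column names are hypothetical; the actual pilot database was not published.
import pandas as pd

KEYS = ["site", "access_point", "reader"]

def link_pilot_data(baseline: pd.DataFrame,
                    eoa: pd.DataFrame,
                    ste: pd.DataFrame) -> pd.DataFrame:
    """Outer-join the three test phases so gaps (missing phases) stay visible."""
    merged = baseline.merge(eoa, on=KEYS, how="outer", suffixes=("_base", "_eoa"))
    ste_renamed = ste.add_suffix("_ste").rename(
        columns={f"{k}_ste": k for k in KEYS})
    return merged.merge(ste_renamed, on=KEYS, how="outer")

def summarize(merged: pd.DataFrame) -> dict:
    """Summary statistics of the kind described in the methodology."""
    txns = merged["transactions_ste"].dropna()
    return {
        "access_points_with_all_phases": int(
            merged[["throughput_s_base", "throughput_s_eoa", "throughput_s_ste"]]
            .notna().all(axis=1).sum()),
        "ste_transactions_mean": txns.mean(),
        "ste_transactions_median": txns.median(),
        "ste_transactions_mode": txns.mode().tolist(),
        # Positive values indicate slower entry with readers than at baseline.
        "mean_throughput_change_s": (
            merged["throughput_s_ste"] - merged["throughput_s_base"]).mean(),
    }
```

The analysis performed for this review covered many more dimensions than this sketch, as the comparative views listed next indicate.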
These techniques provided us with the following summary and comparative views of collected pilot data, among others, which in part served as the basis of our data analysis: compiled data by pilot site; compiled data on baseline population of users at each pilot site and reported access points; comparison of the total population at baseline to total population reported during the ST&E phase; view of pilot site access point and reader matches across testing results (baseline data, Systems Operational Verification Testing (SOVT) data, EOA data, and ST&E data); view of tested reader and access control system characteristics; comparison of baseline throughput times versus EOA and ST&E throughput times for access points with similar readers used; comparison of data across the pilot to identify trends, if any, in areas such as risk level, facility and vessel type, access point type, access decision location, testing mode throughput and transactions, reader hardware model and software version, reader types (fixed versus portable), interface type (contact versus contactless), communication protocol, whether or not registration was used, the enrollment process, the source of the biometric reference template, and canceled card list input frequency by site; comparison of the total number of access points identified during baseline data collection versus the total of access points tested during the EOA and ST&E phases of the pilot; comparison of the mean, median, and mode based on the ST&E number of throughput transactions; and assessment of testing duration during EOA and ST&E testing phases for both throughput and transaction data collection efforts. We utilized the results of our above-noted testing techniques and data results recorded in the TWIC Reader Pilot Program Baseline Report and the 34 final EOA and ST&E pilot site reports to inform our analysis of the pilot data’s completeness, reliability, and accuracy. We further reviewed the data with TSA—the agency leading the TWIC reader pilot—and the independent test agent to better understand observed anomalies. We also considered input from pilot site officials regarding the testing operations and officials from USCG who contributed to the TWIC reader pilot or are to utilize the results of the pilot to inform their future implementation of TWIC. Last, we reviewed the two reports commissioned by USCG to inform the impending regulation on the use of TWIC cards with biometric readers in consideration of comparative data. We analyzed and compared the TWIC reader pilot data with DHS’s TWIC reader pilot report submitted to Congress to determine whether the findings identified in the report are based on sufficiently complete, accurate, and reliable evidence, and are supported by pilot documentation. In doing so, we leveraged our above-noted assessments of TWIC reader pilot planning and data collection practices. Since our assessment determined that pilot data on TWIC technology and operational performance at pilot sites were incomplete, inaccurate, or unreliable, we did not further report on differences between TWIC reader pilot data and DHS’s TWIC reader pilot report. We focused the remainder of our assessment on three areas that were not identified in our prior analysis: (1) reported costs and statements about cost savings, (2) reported entry times for accessing pilot sites versus reader response times, and (3) statements of enhanced security resulting from the use of TWIC with biometric readers. Reported costs and cost savings. 
We sought to validate the cost data reported in DHS's TWIC reader pilot report to Congress against cost data provided by TSA and the independent test agent. We reviewed cost data in the report and compared them with the cost schedule provided by TSA that, according to TSA, served as the central cost data document used in support of the data reported to Congress. We further compared the data in the report to Congress against the data held in individual pilot site reports. In addition, we compared the data in TSA's central cost data document with cost data in each individual EOA and ST&E pilot site report to assess the extent to which cost data in each matched. We reviewed our prior work and received input from seven pilot participants regarding their planned implementation of TWIC readers and related systems. This enabled us to assess the extent to which costs reported in DHS's report represented likely costs for fully implementing, operating, and maintaining the use of TWIC with readers at these pilot sites. Last, we reviewed available pilot documentation to identify data demonstrating that cost savings had been realized as a result of implementing the use of TWIC with biometric card readers. We further reviewed the results of a report commissioned by the Coast Guard to inform the impending regulation on the use of TWIC cards with biometric readers. Reported entry time for accessing pilot sites versus reader response time. We reviewed DHS's TWIC Reader Pilot Program report to Congress to assess the presentation of recorded time measurements. Specifically, we assessed the extent to which the report accurately conveyed entry time for accessing piloted sites, known as throughput time, versus reader response time, known as transaction time. We further assessed the reported time data to identify the extent to which, if at all, throughput time and transaction time data were used interchangeably, could be validated against data from the pilot, and representations made about the data could be validated by data collected during the pilot. Enhanced security. We reviewed DHS's TWIC Reader Pilot Program report to Congress and identified statements made about security enhancements based on pilot results. We examined available pilot documentation to identify data demonstrating that security enhancements at the piloted sites had been realized as a result of implementing the use of TWIC with biometric card readers. We further discussed the lack of supporting pilot data with TSA and DHS and gave them opportunities to provide such data. We also reviewed statements made by DHS officials during a hearing before Congress on the results of the pilot and the results of a report commissioned by USCG to inform the impending regulation on the use of TWIC cards with biometric readers. We further considered two key documents, the TWIC Program Analysis of Alternatives and the TWIC Program Cost Benefit Analysis, which were used to support the decision to execute the TWIC program to enhance security using a common credential and biometric card readers. In doing so, we assessed the information presented in the documents and the operational cost and security benefits defined therein as having significant weight on the decision to implement the TWIC program through the use of a federally issued credential and biometric card readers.
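As an aside, the cost cross-check described above under "Reported costs and cost savings" amounts to reconciling the same per-site figure across three sources. The sketch below shows one simple way such a reconciliation could be run; the data structures, site names, and dollar amounts are hypothetical placeholders, not the actual pilot cost data.

```python
# Illustrative sketch of reconciling each pilot site's cost figure across
# TSA's central cost schedule, the site's EOA/ST&E report, and the report to
# Congress. All structures and values here are hypothetical.
from typing import Dict, List

def reconcile_costs(central_schedule: Dict[str, float],
                    site_reports: Dict[str, float],
                    congress_report: Dict[str, float],
                    tolerance: float = 0.01) -> List[str]:
    """Return a list of discrepancy descriptions across the three sources."""
    findings = []
    sites = set(central_schedule) | set(site_reports) | set(congress_report)
    for site in sorted(sites):
        values = {
            "central schedule": central_schedule.get(site),
            "site report": site_reports.get(site),
            "report to Congress": congress_report.get(site),
        }
        missing = [name for name, v in values.items() if v is None]
        if missing:
            findings.append(f"{site}: missing from {', '.join(missing)}")
            continue
        lo, hi = min(values.values()), max(values.values())
        if hi - lo > tolerance * max(hi, 1.0):
            findings.append(f"{site}: values disagree {values}")
    return findings

# Example with made-up figures (not actual pilot costs):
print(reconcile_costs({"Site A": 1_200_000.0, "Site B": 800_000.0},
                      {"Site A": 1_200_000.0},
                      {"Site A": 1_150_000.0, "Site B": 800_000.0}))
```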
We then assessed the defined security benefits against our 2011 review of the TWIC program's security as implemented and subsequent actions taken by TSA and USCG to address recommendations made in the product. Our investigators also conducted limited covert testing of TWIC program internal controls for acquiring and using TWIC at four maritime ports to update our understanding of the effectiveness of TWIC at enhancing maritime security since our work in May 2011. The information we obtained from covert testing efforts is not generalizable, but we believe that the information from our covert tests provided us with important additional perspective and context on the TWIC program. Finally, we reviewed and assessed the security benefits presented in the TWIC reader notice of proposed rulemaking (NPRM) issued March 22, 2013, to determine whether evidence of the effectiveness of the noted security benefits was presented. We conducted this performance audit from January 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. We conducted our related investigative work in accordance with standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. Table 3 summarizes key TWIC program laws and milestones for implementing the program through November 2012. From fiscal year 2002 through fiscal year 2012, the TWIC program had funding authority totaling $393.4 million, including $111.4 million in appropriated funds (including reprogramming and adjustments). An additional $151.3 million has been made available to maritime facility and vessel owners and operators through port and transportation security grants related to TWIC. Table 4 provides further funding details. As reported by DHS, the TWIC reader pilot cost approximately $23 million and was funded by appropriated funds and federal security grant awards. In issuing the credential rule, DHS estimated that implementing the TWIC program could cost the federal government and the private sector a combined total of between $694.3 million and $3.2 billion over a 10-year period. However, these figures did not include costs associated with implementing and operating readers, as the credential rule did not require the installation or use of TWIC cards with readers. The notice of proposed rulemaking published on March 22, 2013, estimated an additional cost of $234.2 million (undiscounted) to implement readers at the 570 facilities and vessels that the TWIC reader rule currently targets. In addition to the contact named above, David Bruno (Assistant Director), Joseph P. Cruz (Analyst-in-Charge), David Alexander, Hiwotte Amare, Nabajyoti Barkakati, Chuck Bausell, Justin Fisher, Tracey King, James Lawson, Lara Miklozek, and Anna Maria Ortiz made key contributions to this report.
Within DHS, TSA and USCG manage the TWIC program, which requires maritime workers to complete background checks and obtain biometric identification cards to gain unescorted access to secure areas of Maritime Transportation Security Act (MTSA)-regulated entities. TSA conducted a pilot program to test the use of TWICs with biometric card readers in part to inform the development of a regulation on using TWICs with card readers. As required by law, DHS reported its findings on the pilot to Congress on February 27, 2012. The Coast Guard Authorization Act of 2010 required that GAO assess DHS's reported findings and recommendations. Thus, GAO assessed the extent to which the results from the TWIC pilot were sufficiently complete, accurate, and reliable for informing Congress and the proposed TWIC card reader rule. GAO reviewed pilot test plans, results, and methods used to collect and analyze pilot data since August 2008, compared the pilot data with the pilot report DHS submitted to Congress, and conducted covert tests at four U.S. ports chosen for their geographic locations. The test's results are not generalizable, but provide insights. GAO's review of the pilot test aimed at assessing the technology and operational impact of using the Transportation Security Administration's (TSA) Transportation Worker Identification Credential (TWIC) with card readers showed that the test's results were incomplete, inaccurate, and unreliable for informing Congress and for developing a regulation (rule) about the readers. Challenges related to pilot planning, data collection, and reporting affected the completeness, accuracy, and reliability of the results. These issues call into question the program's premise and effectiveness in enhancing security. Planning. The Department of Homeland Security (DHS) did not correct planning shortfalls that GAO identified in November 2009. GAO determined that these weaknesses presented a challenge in ensuring that the pilot would yield information needed to inform Congress and the regulation aimed at defining how TWICs are to be used with biometric card readers (card reader rule). GAO recommended that DHS components implementing the pilot, TSA and the U.S. Coast Guard (USCG), develop an evaluation plan to guide the remainder of the pilot and identify how it would compensate for areas where the TWIC reader pilot would not provide the information needed. DHS agreed and took initial steps, but did not develop an evaluation plan, as GAO recommended. Data collection. Pilot data collection and reporting weaknesses include: Installed TWIC readers and access control systems could not collect required data, including reasons for errors, on TWIC reader use, and TSA and the independent test agent (responsible for planning, evaluating, and reporting on all test events) did not employ effective compensating data collection measures, such as manually recording reasons for errors in reading TWICs. TSA and the independent test agent did not record clear baseline data for comparing operational performance at access points with TWIC readers. TSA and the independent test agent did not collect complete data on malfunctioning TWIC cards. Pilot participants did not document instances of denied access. TSA officials said challenges, such as readers incapable of recording needed data, prevented them from collecting complete and consistent pilot data.
Thus, TSA could not determine whether operational problems encountered at pilot sites were due to TWIC cards, readers, or users, or a combination of all three. Issues with DHS's report to Congress and validity of TWIC security premise. DHS's report to Congress documented findings and lessons learned, but its reported findings were not always supported by the pilot data, or were based on incomplete or unreliable data, thus limiting the report's usefulness in informing Congress about the results of the TWIC reader pilot. For example, reported entry times into facilities were not based on data collected at pilot sites as intended. Further, the report concluded that TWIC cards and readers provide a critical layer of port security, but data were not collected to support this conclusion. For example, DHS's assumption that the lack of a common credential could leave facilities open to a security breach with falsified credentials has not been validated. Eleven years after initiation, DHS has not demonstrated how, if at all, TWIC will improve maritime security. Congress should halt DHS’s efforts to promulgate a final regulation until the successful completion of a security assessment of the effectiveness of using TWIC. In addition, GAO revised the report based on the March 22, 2013, issuance of the TWIC card reader notice of proposed rulemaking.
Amtrak was established by the Rail Passenger Service Act of 1970. Amtrak operates a 22,000-mile network, primarily over freight railroad tracks, providing service to 46 states and the District of Columbia. (See fig. 2.) In fiscal year 2001, Amtrak served about 23.5 million intercity rail passengers over 43 routes. In addition, Amtrak is the contract operator of seven commuter rail systems. These commuter rail systems served about 63.4 million passengers in fiscal year 2001. Amtrak owns a variety of assets, most notably about 650 miles of track, primarily along the Northeast Corridor. The corridor is used by eight commuter railroads (operated by state and local governments) that serve about 1.2 million passengers each workday, and six freight railroads operating 38 trains per day. Amtrak also owns passenger stations, rail shops, and rail equipment, including passenger cars and locomotives. From fiscal year 1971 through fiscal year 2002, the federal government has provided Amtrak with over $25 billion in operating and capital subsidies. In July 2002, Amtrak employment was about 23,000 people. The railroad retirement system provides retirement and disability benefits to the nation's retired railroad workers and their survivors (including those of Amtrak), while the railroad unemployment system pays a portion of lost wages to railroad employees who lose their jobs or are sick. In fiscal year 2001, the Railroad Retirement Board paid about $8.4 billion (net of recoveries) in retirement and survivors' benefits to about 700,000 beneficiaries, and about $95 million in unemployment and sickness benefits to about 40,000 railroad workers. Railroad retirement payroll taxes are made up of tier I and tier II taxes, and are used to pay tier I, tier II, and supplemental annuity benefits. Employers and employees pay tier I taxes at the same rate as social security taxes, and benefits are based on combined railroad and nonrailroad service. Tier I benefit amounts are generally the same as those paid under the Social Security Act. Tier II taxes are used to finance railroad retirement pension benefits over and above social security levels. Under the Railroad Retirement and Survivors' Improvement Act of 2001, employer tier II tax rates are set at 15.6 percent and 14.2 percent for 2002 and 2003, respectively. Beginning in 2004, tier II tax rates will be determined based on the calculation of an assets-to-benefits payout ratio. Employee tier II taxes are 4.9 percent for 2002 and 2003 and are capped at 4.9 percent thereafter. The act did not change tier I tax rates. Should Congress decide to liquidate Amtrak as part of a restructuring of intercity passenger rail service or should Amtrak's financial condition force it to file for bankruptcy, Amtrak must do so under chapter 11 of the Bankruptcy Code. This chapter contains provisions regarding the management and reorganization of debtors, including railroads, and specifies the circumstances under which a railroad may be liquidated. Among other things, chapter 11 seeks to protect the public interest in maintaining continued rail service. However, a railroad may be liquidated upon the request of an interested party (such as a creditor) if the court determines liquidation to be in the public interest. A railroad must be liquidated if a plan for reorganizing it has not been confirmed within 5 years of its filing for bankruptcy. An appointed trustee plays a key role and, subject to the court's review, directs the railroad and its affairs during bankruptcy.
In liquidation, the trustee administers the distribution of the railroad's assets in accordance with the Bankruptcy Code. (See app. I for a discussion of the significant aspects of the railroad bankruptcy process.) If Amtrak had been liquidated on December 31, 2001, secured and unsecured creditors, including the federal government and Amtrak's employees, and stockholders (preferred and common) would have had about $44 billion in potential claims against and ownership interests in Amtrak's estate. The federal government would have been by far the largest secured creditor (for property and equipment) and would have had the largest stockholder interest (in preferred stock), together representing about 80 percent (about $35.7 billion) of the $44 billion amount. Of the $4.4 billion in unsecured claims, Amtrak's employees would have had potential claims for about $3.2 billion in labor protection payments (payments that Amtrak would owe to terminated employees stemming from collective bargaining agreements). Amtrak's employees would also have had other unsecured claims for such things as vacation pay and injury claims, and retirees would have had claims for post-retirement medical benefits. It is not likely that secured and unsecured creditors' claims would have been fully satisfied, because Amtrak's assets—other than the Northeast Corridor—available to satisfy these claims and interests (such as equipment and materials and supplies) are old, have little value, or might not have a value equal to the claims against them. The market value of Amtrak's most valuable asset (the Northeast Corridor) has not been tested. While the Corridor has substantial value, it is subject to easements and has billions of dollars of deferred maintenance. Furthermore, it is not likely that the stockholders would have received any payment for their ownership interest. Amtrak's secured creditors would have had about $22.4 billion in claims against the recorded amount of its property and equipment as of December 31, 2001. (See table 1.) In general, secured creditors are able to attach the property and equipment that were pledged as collateral to secure Amtrak's debt to pay their claims. To the extent that individual secured creditors' claims exceed the liquidation proceeds of specifically pledged property and equipment, the excess outstanding indebtedness would become unsecured claims. Among all of Amtrak's secured creditors, the U.S. government would have had the largest claim to payments from the sale of Amtrak's assets in liquidation. Federal secured claims would have been on Amtrak's real property (up to $14.2 billion) and equipment ($4.4 billion) for a combined total of 83 percent of all secured creditor claims. These claims largely arise from two promissory notes issued by Amtrak and held by the federal government. The first note represents a secured interest on Amtrak's real property (primarily Amtrak's Northeast Corridor) and matures in about 970 years (December 31, 2975). In June 2001, in conjunction with Amtrak's mortgage of a portion of Pennsylvania Station in New York City, the federal government strengthened its position regarding this note by making the principal and interest due and payable if Amtrak files for bankruptcy and is liquidated or if Amtrak defaults under the mortgage. Prior to that date, acceleration of the due date would have required enactment of a statute requiring immediate payment, and there would have been no interest payable unless the due date had been accelerated.
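As a brief aside, the shares cited above can be reproduced directly from the dollar figures in this report. The short check below uses only those figures (small differences are due to rounding); it is an illustration, not an additional estimate.

```python
# Consistency check of the shares cited above, using the report's figures
# (dollar amounts in billions).
total_claims_and_interests = 44.0   # all potential claims and stockholder interests
federal_share = 35.7                # federal secured claims plus preferred stock interest
total_secured_claims = 22.4         # all secured creditor claims
federal_real_property_note = 14.2
federal_equipment_note = 4.4

print(f"Federal share of total claims and interests: "
      f"{federal_share / total_claims_and_interests:.0%}")        # about 80 percent
print(f"Federal share of secured claims: "
      f"{(federal_real_property_note + federal_equipment_note) / total_secured_claims:.0%}")  # 83 percent
```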
On the basis of information provided by the Federal Railroad Administration, we calculate that if Amtrak had been liquidated on December 31, 2001, about $14.2 billion in principal and interest would have been due and payable on the note. Satisfaction of this claim from the sale of the secured assets would depend on the market value of the property—the amount due is limited to the fair market value of the property. The market value of the Northeast Corridor has not been tested; furthermore, commuter and freight railroad easements, about $4 billion in deferred maintenance, and the extent to which this property could be used for telecommunications and other utilities could affect its ultimate value. In the event of liquidation, the federal government could pursue several options, including transferring ownership of these assets to an entity or entities that would allow continued rail use. The second federal note is secured by a lien on Amtrak’s passenger cars and locomotives. This note matures on November 1, 2082, with successive 99-year renewal terms. If Amtrak had been liquidated on December 31, 2001, this note would have been accelerated, and about $4.4 billion in principal and interest would have become immediately payable. Similar to its actions regarding the first note, the federal government acted in 2001 to strengthen its claim. Federal Railroad Administration officials told us that the lien securing the original note required the government to subordinate its lien on the equipment acquired by Amtrak after 1983 (the date of the original note) in individual transactions to the security interest of Amtrak’s equipment creditors in these transactions. This was done to assist Amtrak in obtaining financing from the private sector. Amtrak’s June 2001 mortgage of Pennsylvania Station amended the original real property mortgage discussed above to provide the federal government with a security interest in all other real and personal property held by Amtrak as of June 20, 2001, that was not otherwise encumbered, and any real and personal property acquired by Amtrak after that date. Although the amendment to the mortgage strengthened the federal government’s security interest in otherwise unencumbered property, it did not change its priority with respect to other secured creditors of Amtrak’s equipment. This, in addition to the fact that the equipment is old, with limited market value in liquidation, means that the federal government would probably not have realized much, if anything, from the second federal note had Amtrak been liquidated on December 31, 2001. The majority of the non-U.S. government lenders’ secured property claims would have been associated with passenger cars and equipment ($1.5 billion), locomotives ($941 million), and Northeast Corridor property ($673 million). It is not likely these creditors’ claims would have been fully satisfied in liquidation, because a substantial portion of Amtrak’s equipment is old and may not have had a value equal to the outstanding loan amount. As of March 2002, approximately 36 percent of Amtrak’s active equipment—that is, passenger cars, locomotives, mail/baggage/express cars, and auto carriers—had an average age of 20 years or more. Age was even more of a factor when looking at certain equipment types. For example, about 63 percent of Amtrak’s passenger car fleet and about 34 percent of its locomotives had an average age of 20 years or more. Old equipment, even if well maintained, could potentially limit the proceeds obtained in a liquidation. 
This problem could be compounded if a substantial amount of equipment were placed on the market at the same time. In contrast, some non-U.S. government lenders’ claims on Amtrak’s real property could be more valuable than claims on equipment. That is because stations and maintenance facilities could be refurbished to provide continuing use for either their intended or alternative purposes. Amtrak’s recent acquisition of new passenger cars and locomotives and its efforts to update facilities have resulted in a significant increase in the level of private debt. From September 1997 (the date at which we measured liabilities in our 1998 report on a possible Amtrak liquidation) to December 2001, Amtrak’s private secured creditor claims for both property and equipment increased by 245 percent, from $1.1 billion to $3.8 billion. For the most part, Amtrak’s private-sector financing of equipment and property acquisitions comes from debt and long-term leases. However, in recent years Amtrak has sold some of its equipment and leased it back—through what are called sale-leaseback arrangements. Under these arrangements, the buyer holds title to the equipment and Amtrak receives cash, as well as possession of the equipment. As of December 31, 2001, about 24 percent of Amtrak’s outstanding private debt liability ($924 million) was in sale-leaseback arrangements. (See table 2.) This debt primarily relates to four sale-leaseback transactions Amtrak entered into in fiscal year 2000, involving about 600 passenger cars. In the event of liquidation, because the lessors involved in these transactions own the equipment, their secured creditor position remains intact. In addition, in conjunction with these transactions, a total of about $830 million of Amtrak’s sale proceeds were put into a trust account and recorded as assets on Amtrak’s financial records. Because these funds were specifically earmarked to service the original debt liability associated with the sale-leaseback arrangements, in liquidation they would not necessarily be available to satisfy general creditors’ claims. In response to your interest, we found that 68 percent of Amtrak’s outstanding debt as of December 31, 2001—other than debt held by the U.S. government—was held by, or at least initially was connected with, foreign participants. (See table 3.) Foreign interests accounted for about 72 percent of debt on equipment and 50 percent of debt on property. As of December 31, 2001, Amtrak’s data showed that unsecured liabilities totaled about $4.4 billion. (See table 4.) About 70 percent of this amount would have been for labor protection payments if Amtrak had been liquidated. The largest remaining obligations were for materials and services provided by vendors ($304 million), unpaid employees’ wages and vacation and sick pay ($278 million), and injury claims from passengers, employees, and others ($218 million). In the event of liquidation, the payment of unsecured creditors’ claims would have been even more doubtful than those of secured creditors. The amount of labor protection payments represents the biggest difference between the unsecured creditor claims that were included in our 1998 report on this issue and current estimates. In 1998, we reported that labor protection obligations as of September 1997 could have been about $6 billion if Amtrak had been liquidated, or about $2.9 billion more than the amount that Amtrak estimates could have been due on December 31, 2001. 
This difference stems from changes made by the Amtrak Reform and Accountability Act of 1997. The act eliminated the statutory right to labor protection, made labor protection subject to collective bargaining, and required Amtrak to negotiate new labor protection arrangements with its employees. After Amtrak and unions could not reach agreement, an October 1999 arbitration decision (1) capped labor protection payments at a 5-year maximum (rather than 6 years, as under the statutory labor protection arrangement); (2) made employees who had less than 2 years of service ineligible for payments; and (3) based payments on a sliding scale that provided less payout for each year worked than did the previous system. (See table 5.) Amtrak indicated that $1.8 billion of the cost difference between 1997 and 2001 is attributable to these changes. Another $950 million in the difference between the earlier and current estimates is attributable to management employees who were no longer eligible for labor protection after 1997. According to Amtrak, management eligibility for labor protection ended in 1997 because management employees were not represented by a formal labor organization and, therefore, could not bargain for new labor protection arrangements as required by the Amtrak Reform and Accountability Act of 1997. Amtrak officials noted that the act provided for no process to determine substitute protection for these employees. Included in Amtrak's estimate of labor protection costs is about $70 million for 423 employees who work on trains that receive state financial support. In June 2002, an arbitration panel determined that Amtrak would be responsible for labor protection payments for these employees should they lose their jobs because Amtrak decides to discontinue state-supported train service. However, the panel determined that Amtrak's potential liability would be only one-third of the amount provided to employees on other routes if discontinuation of such service were solely a state's decision. Satisfying more than a small amount of unsecured creditor claims in liquidation would be difficult at best. Unsecured creditors depend entirely on the proceeds from the sale of Amtrak's available assets that remain after secured assets are sold to satisfy secured creditor interests. As of December 31, 2001, all of Amtrak's rolling stock was encumbered by liens and would not have been available to satisfy unsecured creditor claims. In addition, it is uncertain whether Amtrak's real property, such as that on the Northeast Corridor, would be available for sale to satisfy unsecured creditor claims either. That is because the federal mortgage on this real property would become due and payable if Amtrak filed for bankruptcy and were liquidated. In this event, the federal government could take ownership of this property in lieu of foreclosure. To the extent that the value of the Northeast Corridor is insufficient to fully satisfy the federal security interest, the assets of the Northeast Corridor would be unavailable to satisfy unsecured creditor claims. Unsecured creditors would likely have to rely on other sources of payment, such as the sale of receivables due to Amtrak (for example, amounts due from travel agents and credit card companies that participate in the sale of Amtrak's tickets) or the sale of materials and supplies (for example, spare parts and fuel). As of December 31, 2001, these other assets totaled about $218 million.
Amtrak estimates that between $59 million and $90.7 million of its receivables (65 to 100 percent of their value) might be recovered in cash. In contrast, much of Amtrak's spare parts inventory is unique to Amtrak's operations, and Amtrak estimates that only about 35 percent ($44.5 million) of the $127.1 million on Amtrak's balance sheet for materials and supplies might be recovered. Given this situation, it is likely that unsecured creditors would receive little for their claims. The U.S. government holds all of Amtrak's preferred stock, and four corporations hold Amtrak's common stock. The preferred and common stock had recorded values of about $10.9 billion and $94 million, respectively, as of December 31, 2001. In addition, in accordance with Amtrak's enabling legislation and its articles of incorporation, preferred stock holders were entitled to an annual cumulative dividend of at least 6 percent until 1997, when the statute was amended to eliminate the requirement that preferred stock holders are entitled to dividends. Although no dividend has ever been declared or paid, Amtrak has calculated the cumulative unpaid preferred stock dividends from 1981 to 1997 to be about $6.2 billion. In a liquidation, the amount of the preferred stock holders' interest would include all cumulative unpaid dividends. Thus, the total stockholder interest for the federal government as the sole preferred stock holder is about $17.1 billion. These stockholder interests would not be paid until after secured and unsecured claims and administrative expenses relating to liquidating the estate were satisfied. As discussed earlier, it is not likely that secured or unsecured creditor claims would have been fully satisfied had Amtrak been liquidated. The amount of the stockholder interest is the total of the recorded value of the common and preferred stock, plus the cumulative unpaid preferred stock dividends. However, how much these stockholders would be paid depends on the value of Amtrak's assets after creditors' claims are paid, which would include (or be offset by) the amount of Amtrak's retained earnings (or cumulative losses). As of December 31, 2001, Amtrak's cumulative deficit was $16 billion, which represents its cumulative losses. As a result of these factors, it is not likely that either the federal government or common stock holders would have received any money for their stock holdings if Amtrak had been liquidated. We have concluded that the United States would not be legally liable for either secured or unsecured creditors' claims in the event of an Amtrak liquidation. There are two primary reasons. First, the federal government is not a party to contracts between Amtrak and its creditors. Second, Amtrak is not a department, agency, or instrumentality of the U.S. government, and there is no explicit or implicit commitment by the United States to assume these obligations. Therefore, any losses experienced by Amtrak's creditors would be borne in full by the creditors themselves or their insurance companies. Nevertheless, we recognize that creditors may attempt to recover losses from the U.S. government. The Railroad Retirement Board estimated that Amtrak's liquidation would have caused the railroad retirement system to run out of funds in 2024 if all Amtrak employees had lost their jobs and were not reemployed in the railroad industry.
To forestall this result, the Board estimated that the rates contained in the tier II tax rate schedule would have had to be increased 1.64 percentage points over those planned, resulting in a rise from 20.5 percent and 23.0 percent, respectively, to about 22.1 percent and 24.6 percent in calendar years 2002 and 2023. These are between 7 and 8 percent increases. Rates would have continued to be higher in subsequent years. In addition, the railroad unemployment system would have had to borrow over $300 million to make benefit payments and remain financially solvent. (All amounts are in constant 2001 dollars, unless otherwise stated.) Since the retirement system is on a modified pay-as-you-go basis, the financial health of the system largely depends on the size of the railroad workforce, the taxes derived from this workforce, and the amount of benefits paid to retired and disabled individuals and their beneficiaries. Payroll taxes levied on employers and employees are the primary source of the retirement system’s income. In 2001, Amtrak paid about $428 million in payroll taxes into the railroad retirement account (about 9 percent of the total receipts for the year). A loss of Amtrak’s contribution would have had a significant financial impact on the system. The Board estimated that, if Amtrak had been liquidated on December 31, 2001, and no action had been taken to increase tier II payroll taxes beyond that already planned or to reduce benefit levels, the railroad retirement account would start to decline in 2006 and would first have a negative balance (of $742 million) in 2024. (See fig. 3.) If tier II taxes had been increased immediately (that is, in 2002) to offset expected deficits beginning in 2024, the Board determined that tier II tax rates would have had to increase from a baseline of 20.5 percent of earnings (if Amtrak had not been liquidated) to about 22.1 percent in 2002—an increase of 8 percent. (See fig. 4.) The rate would have decreased somewhat in 2003 before leveling off through 2018. In all cases, the Board estimated that rates would be 1.64 percentage points greater than if Amtrak did not undergo liquidation. After 2018, the rate would have increased to about 24.6 percent in 2023 (about 7 percent greater than the baseline rate of 23.0 percent). Although these actions would have kept the fund from having a negative balance, fund balances would have decreased markedly to $3.9 billion in 2024, according to the Board. An Amtrak liquidation could also have affected tier I tax revenues and benefit payments. These are the social security equivalent components of railroad retirement. The Board estimated that if Amtrak had been liquidated on December 31, 2001, tier I tax revenues would have decreased beginning in 2002 (about $200 million), and the shortfalls would have increased each year until 2024, when lost revenue would total about $310 million. Similarly, the Board estimated that benefit payments would also have changed. From 2002 through 2005, benefits would have increased slightly—up to $6 million in 2002 and 2003—as the result of Amtrak employees’ retiring and beginning to collect benefit payments. Beginning in 2006 benefit payments would have decreased each year until 2024, when the reduction would have been about $160 million. Benefits would decrease because Amtrak employees would no longer be earning tier I service credits and therefore would not be entitled to tier I benefits. 
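As a short check of the tier II figures described above, the 1.64 percentage point increase reproduces both the resulting rates and the "between 7 and 8 percent" relative increases. The sketch below uses only the rates stated in this report.

```python
# Checking that a 1.64 percentage point increase matches the tier II rates and
# the "between 7 and 8 percent" relative increases cited above.
increase_points = 1.64
baseline_2002 = 20.5   # baseline rate, percent of earnings, 2002
baseline_2023 = 23.0   # baseline rate, percent of earnings, 2023

print(f"2002 rate: {baseline_2002 + increase_points:.1f}%  "
      f"(+{increase_points / baseline_2002:.1%} relative)")   # about 22.1%, an 8 percent increase
print(f"2023 rate: {baseline_2023 + increase_points:.1f}%  "
      f"(+{increase_points / baseline_2023:.1%} relative)")   # about 24.6%, a 7 percent increase
```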
Board officials told us that an Amtrak liquidation would have had little impact on the administration of tier I taxes and benefits, since the Board would adjust (1) the amounts of monthly advances that it receives from Treasury to make expected benefit payments and (2) the annual reconciliation with the Social Security Administration and the Center for Medicare and Medicaid Services for taxes received and benefits paid (called the “financial interchange”). Social Security Administration officials agreed. They also said that the overall impact on the Social Security Trust Fund would likely have been slight, since tier I tax revenues and benefit payments make up a very small portion of total social security tax receipts and payments. Finally, participants in the railroad unemployment system would also have been adversely affected by an Amtrak liquidation. Financial effects would have been immediate, but short-term. The Board estimated that if Amtrak had been liquidated on December 31, 2001, separated Amtrak employees would have received a total of $344 million in benefit payments during fiscal years 2002 and 2003. The cash reserves of the unemployment system would have been exhausted in 2002, and a total of $338 million would have to have been borrowed from the railroad retirement account, as permitted by statute, from 2002 through 2004 to make these benefit payments. The peak loan balance would have been $349 million, including interest, with all loans repaid in 2005. In order to pay for these benefits and repay the loans, the Board would have had to require that other railroads and participants in the unemployment system increase their payroll tax contributions. According to the Board, between 2002 and 2004, the average tax rate would have had to increase from about 4 percent to 12.5 percent—before decreasing to 9.6 percent in 2005. We provided a draft of this report to Amtrak, the Department of Transportation, and the Railroad Retirement Board for their review and comment. Amtrak provided its comments in a meeting with its Vice President for Financial Analysis (and others) and in a subsequent letter (see app. II). Amtrak stated that it was in general agreement with the draft report and that the report fairly represented the costs and ramifications of an Amtrak liquidation. However, Amtrak believed that there would be material consequences of liquidation about which the draft report is silent. In Amtrak’s estimation, a liquidation could burden commuter and freight railroads (especially on the Northeast Corridor) with substantial operating and capital costs—about $600 million annually. We agree that the potential financial and operational impacts on commuter and freight railroads could be substantial if Amtrak were to be liquidated. We acknowledged this impact both in the draft report supplied to Amtrak for comment and in this final report. Amtrak also believed that we did not provide sufficient information on the costs associated with administering an Amtrak liquidation. Amtrak estimated that these costs would range anywhere from $250 million to $360 million. We agree that there could be substantial costs associated with administering liquidation. However, this report is not intended to estimate the administrative costs of liquidating Amtrak. Finally, in our meeting, Amtrak officials noted that the interest of the preferred stock holder (the U.S. government) would be about $6 billion more than the $10.9 billion we originally estimated in the draft report. 
This figure represents the cumulative dividends on this stock between 1981 and 1997 that Amtrak never declared or paid. In Amtrak’s opinion, although the Amtrak Reform and Accountability Act of 1997 eliminated the statutory requirement for these dividends after 1997, it did not abrogate the $6 billion in cumulative dividends during that period—an amount that Amtrak believes would increase preferred stock holder interest in a liquidation. We noted that this $6 billion was not expressly disclosed in Amtrak’s financial statements, including its draft 2001 financial statements, and brought this to Amtrak’s, and its external auditor’s, attention for possible future disclosure. We agree that upon liquidation the preferred stock holder interest would include the $6 billion in cumulative dividends. As a result, we have revised this final report to include the $6 billion both in the total amount of potential creditor claims and stockholder interests were Amtrak to have been liquidated as of December 31, 2001, and in those sections of the report discussing preferred stock holder interests. Amtrak offered additional clarifying, editorial, and technical comments that were incorporated as appropriate. The Department of Transportation, in oral comments made by Federal Railroad Administration officials, including the Associate Administrator for Railroad Development, did not express an overall opinion about the report. Instead, it offered comments designed to clarify specific points in the draft report. These included clarification that the lien securing the original equipment note required the federal government to subordinate its interest on the equipment acquired by Amtrak after 1983 in individual transactions to the security interests of Amtrak’s equipment creditors in these transactions; that is, the subordination was not discretionary. It also included clarification that any unemployment insurance benefits received by Amtrak employees as the result of a liquidation would reduce their labor protection claims by an equal amount. With few exceptions we incorporated these comments into our report. The Railroad Retirement Board provided comments by E-mail from its General Counsel. These comments were largely clarifying and technical in nature and, with few exceptions, were incorporated into the report. One of the more significant was the comment that railroad unemployment insurance claims are accorded priority in bankruptcy and that, in liquidation, Amtrak’s railroad unemployment insurance costs would be borne by other rail employers. To identify the potential financial issues of an Amtrak liquidation on the federal government, Amtrak employees, and other creditors, we obtained information from Amtrak about potential secured and unsecured creditor claims and equity interests held by preferred and common stock holders, analyzed Amtrak’s records regarding property and equipment leases and debt instruments, and discussed labor protection issues with Amtrak officials. We also reviewed copies of federal mortgages and liens held on Amtrak property and equipment, and discussed with Federal Railroad Administration officials how the federal interest in Amtrak’s assets had changed since we reported on this issue in 1998. We reviewed a draft Amtrak analysis of the cost of liquidating the corporation, prepared in March 2002. We obtained information on various aspects of this analysis from Amtrak, including how certain cost estimates were determined. 
We assumed that Amtrak liquidation had occurred on December 31, 2001, which was the latest date for which Amtrak had information on its assets and liabilities at the time of our review. We updated financial information in this report to take into account adjustments made by Amtrak through August 2002 as the result of its annual audit. However, the audit report had not been issued as of early September 2002. (Amtrak’s fiscal year ends on September 30.) To assess how the railroad retirement and unemployment systems might be affected by liquidation, we asked the Railroad Retirement Board to estimate the potential financial effects of a 100 percent decline in Amtrak employment on the railroad retirement and unemployment systems. Additionally, the Board assumed that terminated workers would not be reemployed in the railroad industry. We chose these assumptions because a 100 percent decline in Amtrak employment is consistent with a liquidation of the company. In addition, the assumption that terminated workers would not be reemployed in the industry is consistent with the fact that industry employment has generally been falling over the past decade, and the Railroad Retirement Board projects that industry employment will continue to decline. This analysis included consideration of changes in the system stemming from the Railroad Retirement and Survivors’ Improvement Act of 2001. We discussed with Board officials both the results of this analysis and the assumptions used to prepare it. We did not independently estimate the costs associated with Amtrak’s liquidation, including developing or obtaining estimates of the market value of Amtrak’s assets. Nor did we independently verify the Board’s analysis of the financial effects on the railroad retirement and unemployment systems from a potential Amtrak liquidation. We also did not attempt to quantify the costs of indirect effects, if any, such as changes in highway and aviation congestion, air quality, or energy consumption associated with Amtrak’s liquidation. We performed our work from January 2002 to September 2002 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 21 days from the report date. At that time, we will send copies of this report to congressional committees with responsibilities for intercity passenger rail issues; the President of Amtrak; the Secretary of Transportation; the Administrator, Federal Railroad Administration; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact either James Ratzenberger at ratzenbergerj@gao.gov or me at heckerj@gao.gov. Alternatively, we may be reached at (202) 512-2834. Key contributors to this report included John Fretwell, Richard Jorgenson, Oscar Mardis, Chanetta Ramey Reed, James Ratzenberger, Peggy Smith, and Stacey Thompson. Chapter 11 of the Bankruptcy Code, which generally sets out the procedures for reorganization, would govern an Amtrak bankruptcy. For the most part, the provisions of chapter 11 applicable to corporate reorganizations would apply to Amtrak, as would several additional provisions applicable only to railroads. 
Because of the historical importance of railroads to the economy and the public, bankruptcy law seeks, among other things, to protect the public interest in continued rail service. In applying certain sections of the Bankruptcy Code, the court and an appointed trustee of Amtrak’s estate would be required to consider the public interest as well as the interests of Amtrak, its creditors, and its stockholders. A trustee must be appointed in all railroad cases. Amtrak could initiate a bankruptcy proceeding by filing a voluntary petition for bankruptcy when authorized by its board of directors. In addition, three or more of Amtrak’s creditors whose unsecured claims totaled at least $10,000 could file an involuntary petition. After a petition was filed, a trustee would be appointed. This individual would be chosen from a list of five disinterested persons willing and qualified to serve. The Secretary of Transportation would submit this list to the U.S. Trustee (an official in the Department of Justice) for the region in which the petition was filed. The trustee would become the administrator of the debtor’s estate and, with court approval, would be likely to hire attorneys, accountants, appraisers, and other professionals who would be disinterested persons to assist with the administration of the estate. Once appointed, the trustee, with court oversight, rather than Amtrak’s board of directors would make decisions about the railroad’s operations and financial commitments. The trustee would have to decide quickly whether Amtrak could continue to maintain adequate staff for operations. In addition, the trustee would have to decide whether Amtrak would need rolling stock equipment, such as passenger cars and locomotives, subject to creditors’ interests for its operations, and if so, would have to obtain any financing necessary to maintain possession of such equipment. Unless the trustee “cured” any default—that is, continued payments—and agreed to perform obligations associated with Amtrak’s rolling stock equipment within 60 days of the bankruptcy petition, creditors with an interest in the equipment, such as lessors and secured lenders, could repossess it. Furthermore, the trustee would have to decide whether to assume or reject Amtrak’s obligations under executory contracts and unexpired leases. To assume a contract or lease on which Amtrak was in default, the trustee would have to (1) cure the default or provide adequate assurance that it would be cured, (2) compensate the other party or assure the other party of compensation for actual pecuniary losses resulting from the default, and (3) provide adequate assurance of future performance. In this context, a trustee could try to negotiate more favorable terms than under Amtrak’s existing contracts and leases. However, the availability of cash for the costs associated with contracts and leases would again be a critical element in the trustee’s decisionmaking. Although payments on assumed contracts or leases would be expenses of the estate, payments due on rejected contracts and leases, as well as any damages and penalties, would give rise to general unsecured claims. In addition, the trustee would have to decide whether to avoid—that is, set aside—certain transactions between Amtrak and its creditors. 
Generally, the trustee could set aside Amtrak’s transfers of money or property for preexisting debts made within 90 days of the bankruptcy petition, as long as Amtrak was insolvent at the time of the transfer and the creditor received more as a result of the transfer than it would receive in a bankruptcy proceeding. However, the trustee would not have unlimited authority in this area. For example, the trustee could not set aside a transfer that was intended by Amtrak and a creditor to be a contemporaneous exchange for new value and that was in fact a substantially contemporaneous exchange. Although the trustee would have considerable authority over Amtrak’s operations and financial commitments, neither the trustee nor the court could unilaterally impose changes in the wages or working conditions of Amtrak’s employees who are covered by collective bargaining agreements. The employees could voluntarily agree to such changes, perhaps in an effort to avoid or forestall liquidation. Otherwise, the trustee would have to seek changes in wages and working conditions by following procedures specified in the Railway Labor Act, including those for notice, mediation, and binding arbitration with the consent of the parties. Perhaps the trustee’s most significant responsibility would be to develop a plan of reorganization. The provisions of chapter 11 applicable to reorganization plans would, for the most part, apply to Amtrak. Therefore, among other things, a reorganization plan would have to (1) designate classes of claims (other than certain priority claims) and interests; (2) specify the unimpaired classes of claims or interests; (3) explain how the plan would treat impaired classes of claims or interests; and (4) provide adequate means for its implementation. Furthermore, the plan would have to indicate whether and how rail service would be continued or terminated, and could provide for the transfer or abandonment of operating lines. Notably, the trustee could propose a plan to liquidate all or substantially all of Amtrak’s assets. Certain unsecured claims would have to be accorded priority in an Amtrak reorganization plan, as in any corporate reorganization plan. For example, administrative claims, such as those for postpetition expenses of the estate and reasonable compensation for the trustee and professionals engaged by the trustee, would have to be paid in full on the effective date of the plan, unless the holder of a claim agreed to an alternative arrangement. Other priority unsecured claims, such as those for wages and contributions to employee benefit plans, would also have to be paid in full on the effective date of the plan, unless each class of claimants accepted a plan providing for deferred payments. In addition, under Bankruptcy Code provisions specifically applicable to railroads, claims for personal injury or wrongful death arising out of Amtrak’s operations, either before or after the filing of a bankruptcy petition, would have to be treated as administrative claims. Furthermore, certain trade claims arising no more than 6 months prior to the bankruptcy petition would also have priority. Finally, the court could require the payment of amounts due other railroads for the shared use of lines or cars, known as “interline service.” After full disclosure of its contents, Amtrak’s creditors and shareholders would vote on the plan of reorganization. 
Because the United States is a creditor and stockholder of Amtrak, the Secretary of the Treasury would accept or reject the plan on behalf of the United States. According to the Federal Railroad Administration, the Attorney General and the Secretary of Transportation would be consulted. However, a plan of reorganization could not be implemented unless confirmed by the court. To confirm the plan, the court would have to find, among other things, either that each class of impaired claims or interests had accepted it or that the plan did not discriminate unfairly, and was fair and equitable, with respect to each class of impaired claims or interests that had not accepted it. In addition, under provisions of the Bankruptcy Code specifically applicable to railroad cases, the court would have to find that each Amtrak creditor or shareholder would receive or retain no less under the plan than it would receive or retain if all of Amtrak’s operating lines were sold and the proceeds of such sale, and other estate property, were distributed under a chapter 7 liquidation. Finally, the court would have to find that Amtrak’s prospective earnings would adequately cover any fixed charges, and that the plan was consistent with the public interest. If more than one reorganization plan met these requirements, the court would be required to confirm the plan most likely to maintain adequate rail service in the public interest. Following confirmation of a reorganization plan, Amtrak would be discharged from its debts. If an Amtrak reorganization plan were not confirmed within 5 years of the bankruptcy petition, the court would have to order liquidation. However, the court could order liquidation earlier, upon the request of a party in interest, after notice and hearing, if it determined liquidation to be in the public interest. Under such circumstances, the trustee would distribute the assets of the estate as though the case were a liquidation under chapter 7. Because the case would not be converted to a proceeding under chapter 7, relevant provisions of chapter 11 applicable to railroads would continue to apply. In a liquidation, the trustee would turn over collateral or make payments to the proper secured creditors, convert remaining property to cash, and distribute the proceeds to the unsecured creditors in accordance with the distribution scheme contained in chapter 7. Proceeds would be distributed in the following order: priority unsecured claims, including those discussed above, in specified order; general unsecured claims, timely and tardily filed; fines, penalties, and damages that are not compensation for pecuniary loss; and postpetition interest on claims previously paid. Claims of a higher priority would have to be provided for before claims of a lower priority. In addition, in most cases, if the holders of claims in a class could not be paid in full, claims would have to be paid on a pro rata basis.
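The distribution scheme just described is, in effect, a priority waterfall: each class of claims must be satisfied in full before the next class receives anything, and a class that cannot be paid in full shares the remaining proceeds pro rata. The short sketch below is illustrative only and uses hypothetical claim classes and dollar amounts; it is not drawn from Amtrak's actual claims and is included solely to make the allocation rule concrete.

```python
# Illustrative sketch only: a simplified priority "waterfall" distribution of
# liquidation proceeds, using hypothetical claim classes and amounts. It is not
# a model of an actual Amtrak estate; it only demonstrates the rule described
# above that higher-priority classes are paid in full before lower-priority
# classes, and that a class that cannot be paid in full is paid pro rata.

def distribute(proceeds, claim_classes):
    """Pay claim classes in priority order; split any shortfall pro rata."""
    payouts = {}
    remaining = proceeds
    for class_name, claims in claim_classes:
        total = sum(claims.values())
        if total == 0:
            continue
        if remaining >= total:
            # Enough cash: every claimant in this class is paid in full.
            payouts[class_name] = dict(claims)
            remaining -= total
        else:
            # Not enough cash: each claimant receives the same fraction
            # (pro rata share) of its claim, and nothing is left for
            # lower-priority classes.
            fraction = remaining / total
            payouts[class_name] = {c: amt * fraction for c, amt in claims.items()}
            remaining = 0
    return payouts, remaining

# Hypothetical example: $100 of proceeds against $130 of unsecured claims.
classes_in_priority_order = [
    ("priority unsecured", {"wages": 40, "administrative": 20}),
    ("general unsecured", {"trade creditor A": 40, "trade creditor B": 20}),
    ("fines and penalties", {"penalty": 10}),
]

paid, leftover = distribute(100, classes_in_priority_order)
print(paid)      # general unsecured claims receive about 67 cents on the dollar
print(leftover)  # 0
```

In an actual case, of course, the classes, amounts, and any subordination arrangements would be determined by the court under the Bankruptcy Code, not by a simple calculation like this one.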
The National Railroad Passenger Corporation (Amtrak), the nation's intercity passenger rail operator, was created by Congress in 1970 after the nation's railroads found passenger service to be unprofitable. It is a private corporation. Its financial situation has never been strong, and it has been on the edge of bankruptcy several times. Early this year, Amtrak stated that federal financial assistance would have to more than double for the corporation to survive. Given Amtrak's worsening financial condition and the potential for intercity passenger rail to play a larger role in the nation's transportation system, there is growing agreement that the mission, funding, and structure of the current approach to providing intercity passenger rail merit reexamination.

If Amtrak had been liquidated on December 31, 2001, secured creditors and unsecured creditors--including the federal government and Amtrak employees--and stockholders would have had $44 billion in potential claims against and ownership interests in Amtrak's estate. It is unlikely that secured and unsecured creditors' claims would have been fully satisfied, because Amtrak's assets available to satisfy these claims and interests are old, have little value, or appear unlikely to have a value equal to the claims against them.

An Amtrak liquidation would have adversely affected participants in the railroad retirement and unemployment systems. If all the Amtrak employees had lost their jobs on December 31, 2001, and were not reemployed in the railroad industry, the railroad retirement system would have lost over $400 million in annual contributions from Amtrak payroll taxes. The Railroad Retirement Board estimated that the railroad retirement account would begin to decline in 2006 and would be in a deficit by 2024 if no actions were taken to increase payroll taxes or reduce benefits. The financial impact on the railroad unemployment system would have been immediate but short term. According to the Board, the unemployment account would have been exhausted in 2002 and would have had to borrow $338 million from the railroad retirement account, and unemployment taxes would have had to increase from 4 percent to 12.5 percent between 2002 and 2004 for the system to maintain its financial health.
A U.S. government–funded enterprise fund is an organization that is designed to promote the expansion of the private sector in developing and transitioning countries by providing financing and technical assistance to locally owned small and medium-sized enterprises. The U.S. government provides initial capital to an enterprise fund through a grant; the fund may then seek additional capital from the private sector to invest alongside the enterprise fund. Enterprise funds are modeled on investment management in the venture capital industry, in which venture capital is invested primarily in small companies during early stages of their development, with the investors monitoring, advising, and following up on operational results. It is expected that some investments will fail, but successful ventures are intended to offset the losses over the long term.

The U.S. government initially funded enterprise funds in the early 1990s to promote the development of the private sector in Eastern and Central European countries following the breakup of the former Soviet Union in December 1991. USAID invested $1.2 billion to establish 10 enterprise funds, covering 19 countries in Central and Eastern Europe and the former Soviet Union. In September 2013, USAID issued a lessons-learned report that documented the successes and challenges faced by the Eastern and Central European enterprise funds. The report concluded that while enterprise funds have demonstrated that they can be a successful tool in achieving positive financial returns and developmental objectives, results to date have been mixed, based upon the economic and political environment in which they operate along with the overall investment strategy and the specific investment decisions made by each fund's board and management team. The report also stated that, in many cases, the enterprise funds in Europe and Eurasia took up to 2 years before they were ready to make their first investments.

In early 2011, the events characterized as the Arab Spring renewed interest in the potential use of the enterprise fund model in the Middle East region as well as in other countries undergoing economic and political transition. EAEF and TAEF were thus modeled after the enterprise funds in Eastern and Central Europe. EAEF was incorporated in October 2012 and funded in March 2013, when the grant agreement between USAID and EAEF was signed. TAEF was incorporated in February 2013 and funded in July 2013, when the grant agreement between USAID and TAEF was signed. The Funds' authorizing legislation allows them to achieve their goals through the use of loans, microloans, equity investments, insurance, guarantees, grants, feasibility studies, technical assistance, training for businesses receiving investment capital, and other measures. The Funds have a dual mandate, or "double bottom line," in that they are intended to achieve a positive return on investment while also achieving a positive development effect. The authority of the Funds to provide assistance expires on December 31, 2025. The Funds are established as nonprofit corporations that do not have shareholders and do not distribute dividends. The authorizing legislation states that each Fund shall have a board of directors that is composed of six private U.S. citizens and three private host-country citizens. The authorizing legislation further requires that board members have international business careers and demonstrated expertise in international and emerging markets investment activities.
According to a September 2013 lessons-learned report by USAID on past enterprise funds, identifying and recruiting the most experienced individuals to serve on the fund’s board of directors is the single most important element in achieving the fund’s long-term development goals and financial profitability. U.S. board members serve on a volunteer basis, while the Egyptian and Tunisian citizen board members are permitted to receive compensation for their time and services. The Funds’ boards are responsible for establishing their own operating and investment policies and directing their corporate affairs in accordance with applicable law and the grant agreements. EAEF has not made any investments in Egypt, as its first investment, to purchase an Egyptian bank, did not come to fruition. EAEF’s investment strategy had been to purchase a bank that would lend money to small and medium-sized enterprises in Egypt. According to the EAEF Chairman, EAEF envisioned that it would have a greater impact on the Egyptian economy by making one large investment rather than a series of smaller investments. In August 2013, EAEF made plans to purchase a small bank in Egypt and subsequently conducted due diligence on the bank by hiring a large U.S. accounting firm to review the bank’s financial situation, among other things. In June 2014, the EAEF Board of Directors approved a decision to acquire the bank. However, according to the EAEF Chairman, the Egyptian Central Bank rejected EAEF’s application to purchase the bank. As of December 2014, EAEF was considering other investment options. According to EAEF officials, the Fund is now conducting due diligence on potential investments in the food and beverage, healthcare, and consumer finance sectors. The Chairman stated that he anticipates investing $60 million to $90 million in these three areas. Additionally, the EAEF Chairman told us that EAEF plans to consider investments in firms varying in size from SMEs to larger firms. USAID has obligated $120 million to EAEF, of which approximately $588,000 has been disbursed. Costs associated with performing the due diligence review constituted the majority of EAEF’s expenditures through 2014. Specific categories of EAEF’s expenditures include professional (e.g., legal) fees and travel expenses. Thus far, EAEF has spent less on administrative expenses than the approximately $3 million estimated for the first year in its preliminary budget. USAID has obligated $60 million to TAEF, of which TAEF has disbursed approximately $1.6 million, for administrative expenses and investments. TAEF plans to promote private sector development in Tunisia by investing in (1) a private equity fund that supports SMEs, (2) direct investments in SMEs smaller than those targeted by the private equity fund, (3) microfinance institutions, and (4) start-ups. In 2013, TAEF established a subsidiary company in Tunisia—the TAEF Advisory Company—that directly oversees TAEF’s efforts in these four areas. In June 2014, TAEF committed to its first investment of over $2.4 million in a private equity fund that invests in SMEs in a variety of industries, such as telecommunications, agribusiness, and renewable energy. TAEF is one of several investors in the private equity fund; other investors include foreign donors. According to the TAEF Chairman, aggregate investments in the Fund from all sources total approximately $20 million. TAEF officials told us that the Fund will have representation on the equity fund’s advisory committee. 
According to TAEF officials, the Fund has not yet made any investments in the remaining areas of direct investments in SMEs smaller than those targeted by the private equity fund, microfinance institutions, and start-ups. According to the TAEF Chairman, TAEF is in the process of conducting due diligence on two microfinance entities. Thus far, TAEF has spent less on administrative expenses than the approximately $900,000 estimated for the first year in its preliminary budget.

Since their inception, EAEF and TAEF have made progress in establishing key administrative infrastructures necessary to support their investment operations. The Committee of Sponsoring Organizations of the Treadway Commission's (COSO) 2013 internal control evaluation tool establishes a framework for assessing management structures. As shown in table 1, EAEF and TAEF have made progress in establishing structures for administrative infrastructure, corporate governance, internal control, and human capital management in line with key elements of the COSO framework.

Administrative infrastructure. Administrative infrastructure refers to the basic systems and resources needed to set up and support organizations' operations—which also contribute to developing a culture of accountability and control. Since being funded in 2013, EAEF and TAEF have focused on establishing essential administrative infrastructures. EAEF set up its headquarters in New York City, New York. In July 2014, EAEF hired its first employee to occupy the position of Chief of Staff and Director of Policy Planning. According to the EAEF Chairman, EAEF plans to hire an investment manager and a chief financial officer in the future. TAEF has a U.S. office located in Washington, D.C., and a Tunisian office located in Tunis, Tunisia, both of which are led by a managing director. TAEF plans to hire two investment officers in the future. EAEF and TAEF administrative expenses thus far have mostly consisted of professional fees (e.g., expenses for legal and consulting services), travel expenses, and so forth.

Corporate governance. Corporate governance can be viewed as the formation and execution of collective policies and oversight mechanisms to establish and maintain a sustainable and accountable organization while achieving its mission and demonstrating stewardship over its resources. Generally, an organization's board of directors has a key role in corporate governance through its oversight of executive management; corporate strategies; and risk management, audit, and assurance processes. The Funds have established bylaws and other rules for corporate governance. The bylaws cover the purpose of the Funds, voting rules, and the duties and responsibilities of corporate officers. The boards of both Funds have met regularly since their inceptions. In addition, the Funds have established corporate policies and procedures, which USAID has approved. In November 2014, the EAEF Board of Directors established several committees, including an investment committee, a governance and nominating committee, an external relations committee, and an audit committee. EAEF and TAEF each have to fill two vacant board member positions, one for a U.S. citizen and the other for a host country citizen. EAEF and TAEF are currently considering potential candidates to fill the vacant positions.
EAEF and TAEF have established a variety of internal controls in the areas of control environment, risk assessment, control activities, information and communication, and monitoring, with additional actions under way.

Internal control. Internal control provides reasonable assurance that key management objectives—efficiency and effectiveness of operations, reliability of financial reporting, and compliance with applicable laws and regulations—are being achieved. Areas of internal control include control environment, risk assessment, control activities, information and communication, and monitoring.

Control environment. The Funds have established directives on ethical business practices and detailed conflict-of-interest policies. In addition, each Fund has a policy on disciplinary sanctions that states that any violation of the Fund's laws or ethical guidelines could subject an individual to potential disciplinary sanctions, such as probation or reduction in pay.

Risk assessment. EAEF conducted a due diligence review for its first potential investment, the purchase of a bank. Among other things, EAEF hired a large accounting firm to review a sample of the bank's loans. TAEF established due diligence procedures in which it examined the governance, financial, operations, and legal status of its first investment. Before funding its first investment, TAEF carried out its due diligence procedures and determined that there were no significant issues (e.g., financial or legal issues) that would impede TAEF from making the investment. The meeting minutes of the board investment committee indicate that the board discussed the results of the due diligence assessment, including the extent of risk involved, and that the board unanimously approved the fund's first investment.

Control activities. EAEF and TAEF have established several financial and cash management–related controls, including the following: Financial statements will be prepared on a quarterly basis and sent to the audit committees of the board of directors to review the performance of the Funds on a timely basis. Each Fund will, to the extent practicable, prepare an annual budget detailing its estimated operational requirements. The budget will be approved by the president and audit committee of the board of directors before the beginning of the Fund's fiscal year (January 1). Quarterly, the board of directors will receive financial reports that compare the actual results to the budgeted amounts. Expenses in excess of a certain amount must be approved in advance by the Chairman of the Board or the President (or their designees) and one other Director. All available periodic financial statements and (if prepared) audits for all entities in which the Fund has invested shall also be maintained for audit review and project monitoring.

Information and communication. EAEF and TAEF corporate policies state that each Fund will maintain an investment database that lists all of its investments and will include information such as company name, amount of investment, and industry. The Funds have met with several external organizations to discuss their mission and activities, including U.S. government agencies, foreign governments, international organizations, and host country businesses.

Monitoring. EAEF and TAEF have reported to external parties, including Congress, USAID, and the public, on their use of resources, with additional accountability actions under way.
For example, both Funds submitted reports to Congress that detailed their administrative expenses for 2013, and both Funds have submitted quarterly financial reports to USAID for its review. With regard to performance planning and reporting, EAEF officials said that the Fund is in the process of developing its required performance monitoring plan. In November 2014, TAEF developed a solicitation for firms based in Tunisia to develop its performance monitoring plan. In terms of audits, the Funds are responsible for appointing independent certified or licensed public accountants, approved by USAID, to complete annual audits of the Funds' financial statements. According to the grant agreements, the audits will be conducted within the scope of U.S. generally accepted auditing standards. According to USAID officials, the Funds plan to have their 2013 and 2014 financial statements audited.

Human capital management. Cornerstones of human capital management include leadership; acquiring, developing, and retaining talent; and building a results-oriented culture. The Funds are meeting their initial human capital needs through hiring of a limited number of personnel to occupy key positions, such as a managing director. According to the EAEF and TAEF Chairmen, they envision their organizations as having a small number of personnel. Accordingly, both Funds have recruited a limited number of employees to support their administrative operations and initial investment planning. Specifically, EAEF has hired one employee as its Chief of Staff and Director of Policy Planning. TAEF has hired three employees, including a Managing Director based in Washington, D.C.; a Chief Operating Officer and Managing Director based in Tunis, Tunisia; and an Executive Assistant based in Tunis. The Funds took steps to recruit and hire their initial staff, such as by interviewing potential candidates and reviewing their resumes. The Funds have generally outsourced their accounting and legal functions. Both Funds have created job descriptions for their employees. To build a results-oriented culture, the Funds have established guidelines for providing compensation to their employees. For example, contingent upon USAID approval of a compensation framework, the Funds may enter into bonus or incentive compensation arrangements with their employees. The EAEF and TAEF grant agreements state that the salaries and other compensation of any of the directors, officers, and employees of the Funds shall be set at reasonable levels consistent with the nonprofit and public interest nature of the Funds. EAEF hired companies to conduct an executive compensation study and to administer its human capital policies, including terms of recruitment, hiring, and employee benefits.

While the Funds have generally met their obligations under the grant agreements, neither Fund has submitted the performance monitoring plans required under the grant agreements. USAID has also not tracked the Funds' use of cash in a way that allows the agency to monitor whether EAEF and TAEF are spending it in a timely manner. Further, EAEF has not implemented those provisions under the grant agreement related to marking and public communications. Last, the Funds' corporate policies do not include key vetting procedures to prevent the illicit use of funds, the presence of which was expected by USAID. EAEF and TAEF have to date generally complied with the requirements in the grant agreements.
The grant agreements contain 22 discrete requirements with which each of the Funds must comply, such as submission of quarterly financial reports to USAID and annual reports to Congress on administrative expenses. As of December 2014, TAEF had fully complied with 21 of the 22 requirements, and EAEF had fully complied with 17 of the 22, as shown in table 2. Both Funds submitted the required annual reports on administrative expenses. Additionally, both Funds submitted the required quarterly financial statements.

EAEF and TAEF have not yet submitted performance monitoring plans as required by the grant agreements. Specifically, the grant agreements require the Funds to develop performance monitoring plans in consultation with USAID within 120 days after the grant agreement enters into force. However, as of February 2015, EAEF and TAEF performance monitoring plans were approximately 19 months and 15 months overdue, respectively. The performance monitoring plans are intended to allow external stakeholders and, for the purposes of oversight, USAID to monitor the Funds' progress toward meeting their goals. The grant agreements also require that the performance monitoring plans include performance indicators, which must include return on investment for U.S. capital invested in Egypt and Tunisia through the Funds and the number of SMEs in Egypt and Tunisia benefitting from Fund activities. USAID and the Funds are to review the performance monitoring plans and associated indicators during the semiannual meetings with USAID to assess progress. Without performance monitoring plans, USAID and other stakeholders cannot assess progress toward agreed-upon goals and indicators during the semiannual reviews. USAID referred the Funds to monitoring and evaluation experts to assist the Funds in developing their performance monitoring plans, according to USAID officials. The EAEF and TAEF Chairmen told us that it would have been premature to submit a performance monitoring plan before finalizing investment strategies. TAEF and EAEF officials told us that they are currently seeking contractors to develop and implement performance monitoring plans. In November 2014, TAEF issued a scope of work that envisioned a performance monitoring plan being presented to USAID 60 days after the Fund had selected and engaged a contractor. According to EAEF officials, EAEF plans to submit a performance monitoring plan to USAID in early 2015.

USAID's grant agreements with EAEF and TAEF state that the Funds may request funds for anticipated expenditures for up to a 90-day period from the date of the request. In addition, USAID guidance on advance payments states that, generally, advance payments or any portion of an advance payment not liquidated within 150 days is considered delinquent. Any exception to this general rule must be supported by a documented rationale from the agreement officer and approved by USAID's financial management office. EAEF and TAEF have not liquidated some of their advances within 150 days of payment, and the advances were therefore delinquent. After we shared our preliminary findings with USAID, program officials sought and obtained the necessary approvals. As of November 2014, EAEF had an outstanding advance balance of approximately $247,000, and TAEF had an outstanding advance balance of approximately $477,000. The Funds reported their liquidation of their advance payments through quarterly financial reports that are sent only to the USAID program representative.
However, USAID's financial management office is responsible for monitoring whether the Funds' advances are outstanding. Because USAID's financial management office was not receiving the quarterly financial reports, it was unable to ensure that the Funds were not maintaining USAID funds in excess of their immediate disbursement needs. In commenting on a draft of this report, USAID stated that although not strictly required by agency policy, the program representative is now sharing all quarterly financial information with the financial management office to facilitate oversight.

EAEF has not implemented the provisions in its grant agreement related to marking and public communications. Those provisions require the Fund to develop a logo in addition to using the USAID logo, to acknowledge USAID's role in the provision of foreign assistance, and to use a general disclaimer in those instances where it is unable to obtain USAID's approval in advance of a public communication. We have reported in the past that marking can raise awareness about the source of assistance with individuals who come into contact with the assistance sites or materials. According to USAID and EAEF officials, the two organizations are working together to see that the Fund implements these provisions.

The grant agreements aim to prevent the contribution of U.S. funds (1) to certain individuals (e.g., individuals and organizations associated with terrorism) by conducting appropriate vetting, (2) for certain purposes (e.g., funds may not be used toward the purchase of gambling equipment), (3) to political organizations not committed to democracy, and (4) to the military of another government. Internal control standards direct organizations to establish control activities such as policies and procedures that enforce management directives and help ensure that actions are taken to address risks. We found that the Funds have accounted in their corporate policies for three out of the four prohibitions related to preventing the contribution of EAEF or TAEF funds to illicit transactions or purposes. While USAID grant agreements with the Funds establish procedures designed to prevent transactions with individuals and organizations associated with terrorism, and the Chairmen of both Funds have committed to mitigate any risk of illicit use of U.S. funds, neither Fund's corporate policies contain specific vetting provisions. Specifically, they lack provisions related to vetting potential investees and the requirement that any investee planning to lend U.S. funds in excess of $25,000 onward to another business or invest in another entity certify to the Funds that it will conduct certain due diligence activities to prevent their illicit use. While USAID approved the Funds' corporate policies, USAID officials subsequently indicated that they expected this prohibition related to vetting potential investees and onward lending to be included in the Funds' corporate policies. Since the Funds have made only one investment to date—TAEF's $2.4 million investment—there has been only one instance where vetting was necessary. In commenting on a draft of this report, the TAEF Chairman emphasized that the Fund carried out all required due diligence with respect to vetting and assured itself of the appropriateness of the investee's procedures.
For example, TAEF provided us with documentation of TAEF's efforts to screen the investee's primary officials against the required vetting lists as well as the investee's policy for verifying the credentials of individuals and firms. In addition, in November 2014, TAEF signed a side letter with the investee in which the investee agreed to screen all future recipients against lists of proscribed parties.

Since their inception in 2013, EAEF and TAEF have been awarded $180 million by USAID and have made progress in establishing their administrative infrastructures, internal controls, corporate governance mechanisms, and investment strategies. To date, the Funds have disbursed approximately $2 million of the $180 million awarded to them and thus have a significant amount of U.S. funding available for future investments. The Funds have generally complied with the requirements in their grant agreements with USAID. For example, the Funds have submitted required financial reports to USAID and Congress. In addition, USAID and the Funds continue to take steps to improve oversight and compliance with the grant agreements. However, they have not yet completed actions to further strengthen oversight and compliance in several areas. In the area of cash management, USAID is exploring ways to ensure that it has all necessary financial information from the Funds, but it has not yet ensured that the Funds liquidate cash advances in a timely manner. In addition, while both Funds are hiring contractors to develop performance monitoring plans—for which both Funds required an extension of the original submission deadline—neither Fund has completed its performance monitoring plan. Further, EAEF has not yet complied with the provisions in the grant agreement related to public communications, such as those requiring EAEF to acknowledge the U.S. government's financial contribution. While both Funds have demonstrated their commitment to ensuring that U.S. funds are not used for prohibited purposes, neither Fund has incorporated vetting requirements for individuals and organizations into its corporate policies. Taking steps to address these remaining items would strengthen USAID oversight and the Funds' compliance with the grant agreements, which will be particularly important as the Funds' investments grow in number and size.

To further enhance USAID's oversight of the Funds and to ensure the Funds fully implement the grant agreements, we recommend that the Administrator of USAID take the following four steps:
1. establish a process to better manage cash advances to the Funds,
2. make certain that the Funds comply with grant agreement requirements related to performance monitoring,
3. ensure that the Funds comply with grant agreement requirements related to public communications, and
4. ensure that the Funds' corporate policies reflect grant agreement provisions regarding vetting requirements designed to prevent transactions with prohibited individuals and organizations.

We provided a draft of this report to USAID, the Department of State (State), EAEF, and TAEF for review and comment. USAID and TAEF provided written comments, which we have reprinted in appendixes II and III, respectively. State provided technical comments, which we incorporated as appropriate. In its written comments, reprinted in appendix II, USAID concurred with our four recommendations and indicated the steps it was taking to implement each of them.
Specifically, regarding our recommendation to establish a process to better manage cash advances, USAID stated that going forward the program representative would share Fund quarterly financial reports with the office of the Chief Financial Officer. In response to our recommendation pertaining to performance monitoring, USAID stated that it would work with each Fund to meet a revised deadline of the first quarter of 2015 to submit a completed performance monitoring plan. With regard to our recommendation pertaining to public communications, EAEF confirmed to USAID that it would meet all related requirements going forward, including proposing a logo in the first quarter of 2015. Lastly, the Chairmen of both Funds confirmed to USAID that they would propose amendments to their corporate policies to include the vetting procedures to their respective Boards. In its written comments, reprinted in appendix III, TAEF agreed with our findings and provided some additional information. For example, TAEF stated that the delay it requested to implement its performance monitoring plan would result in more timely and better program evaluation going forward. We are sending copies of this report to the appropriate congressional committees, State, USAID, and EAEF and TAEF. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-3149 or GootnickD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. Conferees for the bill that would become the Consolidated Appropriations Act, 2012 (Pub.L. No. 112-74) requested that we examine the management and oversight of the Egyptian-American Enterprise Fund (EAEF) and the Tunisian-American Enterprise Fund (TAEF) (the Funds) to determine if appropriate and sufficient safeguards exist against financial misconduct. In this report, we examined (1) the status of EAEF’s and TAEF’s investments, (2) EAEF’s and TAEF’s progress in establishing key management structures to support their missions and operations, and (3) the extent to which EAEF and TAEF have complied with certain requirements of the USAID grant agreements. To assess the extent to which the Funds have made investments, we reviewed the Funds’ strategic planning documents and their due diligence reports. We obtained budget data from the U.S. Agency for International Development (USAID) on its obligations and disbursements to the Funds from fiscal years 2013 to 2014. We conducted an assessment of the reliability of the data by reviewing USAID’s responses to a set of data reliability questions and by interviewing USAID budget officials. We found the data to be sufficiently reliable for our purposes. In addition, we interviewed the Chairmen and senior management of EAEF and TAEF to discuss their investment strategies, plans, and investment efforts thus far. To examine what progress the Funds have made in establishing key management structures, we reviewed EAEF and TAEF documents, including the Funds’ statements of corporate policies and procedures, bylaws, employee job descriptions, organization charts, financial and annual reports, and board of director meeting minutes. 
We used the Committee of Sponsoring Organizations of the Treadway Commission's (COSO) Internal Control – 2013 Integrated Framework evaluation tool as a framework for gathering information on the Funds' management structures and assessing the extent to which they had established such structures. Although our analysis included gaining an understanding of EAEF's and TAEF's actions related to establishing internal control mechanisms, we did not evaluate the implementation of internal control at the Funds. We also interviewed EAEF and TAEF Chairmen and senior management to obtain information on the management structures the Funds had already established or planned to establish.

To assess the extent of Fund compliance with certain grant agreement requirements, we used the EAEF and TAEF grant agreements as our primary criteria for identifying the requirements to which the Funds are subject. We identified 22 requirements that the Funds are subject to and then determined whether the Funds had met these requirements by collecting relevant USAID and Fund documentation, such as the Funds' reports to Congress on administrative expenses. We also reviewed the Funds' statement of corporate policies and procedures and documentation related to the Funds' efforts to develop performance monitoring plans. In addition, we interviewed the EAEF and TAEF Chairmen and senior management about their efforts to comply with the terms and conditions of the grant agreements as well as USAID officials regarding their efforts to oversee the Funds' compliance with the grant agreements. We also examined the process that USAID used to develop the EAEF and TAEF grant agreements, which entailed reviewing its agency policies, procedures for deviating from those policies, and the grant agreements themselves.

We conducted this performance audit from March 2014 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Jason Bair (Assistant Director), R. Gifford Howland (Analyst-in-Charge), Debbie Chung, Emily Gupta, and Jeffrey Isaacs made key contributions to this report. Mark Dowling, Etana Finkler, Paul Kinney, and Steven Putansu provided additional support.
In the wake of the economic and political transitions associated with the "Arab Spring," Congress authorized the creation of enterprise funds for Egypt and Tunisia in 2011. EAEF and TAEF aim to develop the private sectors in these countries, particularly SMEs, through instruments such as loans, equity investments, and technical assistance. USAID signed grant agreements with both Funds in 2013 and has thus far obligated $120 million to EAEF and $60 million to TAEF. In this report, GAO examines (1) the status of the Funds' investments, (2) the Funds' progress in establishing key management structures to support their missions and operations, and (3) the extent to which the Funds have complied with requirements in the grant agreements. To address these objectives, GAO reviewed USAID and Fund documents, such as EAEF and TAEF grant agreements, policies and procedures, and the Funds' boards of directors meeting minutes. GAO also interviewed USAID and Fund officials.

The Egyptian-American Enterprise Fund (EAEF) has not yet made any investments in Egypt, and the Tunisian-American Enterprise Fund (TAEF) has made an investment of over $2.4 million in Tunisia. EAEF has not made any investments in Egypt as its initial investment did not proceed as planned. EAEF's attempt to purchase a bank in Egypt that would lend money to small and medium-sized enterprises (SMEs) was rejected by the Egyptian Central Bank. EAEF is now considering other options, such as investments in the food and beverage sector. TAEF's investment strategy is to invest in four different areas: (1) a private equity fund investing in SMEs, (2) direct investments in SMEs smaller than those targeted by the private equity fund, (3) microfinance institutions, and (4) start-ups. In June 2014, TAEF made an investment of over $2.4 million in a private equity fund that invests in and finances Tunisian SMEs.

EAEF and TAEF (the Funds) have made progress in establishing key management structures to support their mission and operations, with additional actions under way. In terms of administrative structures, both Funds have hired initial staff. Regarding their corporate governance, EAEF and TAEF both have boards of directors that have met regularly, adopted bylaws, and developed corporate policies and procedures. Both Funds plan to develop and implement additional management structures in the future, such as audits of their 2013 and 2014 financial statements.

While TAEF and EAEF have generally fulfilled the requirements of the grant agreements, GAO found three gaps in the Funds' implementation and one gap in the U.S. Agency for International Development's (USAID) implementation. First, the Funds have not yet submitted their performance monitoring plans as required by the grant agreements. Second, EAEF has not implemented the provisions in its grant agreement related to public communications, such as development of its own logo. Third, the Funds' corporate policies do not include procedures to implement vetting requirements designed to prevent illicit use of the funds, the presence of which was expected by USAID. USAID has also not tracked the Funds' use of cash in a way that allows the agency to monitor whether EAEF and TAEF are spending it in a timely manner. Collectively, these gaps in implementation pose challenges for USAID's oversight of the Funds.
GAO recommends that USAID take steps to further enhance its oversight of the Funds' compliance with the grant agreements and other requirements by establishing a process to better manage cash advances to the Funds; ensuring that the Funds comply with the grant agreement requirements related to performance monitoring and public communications; and ensuring that the Funds' corporate policies include vetting requirements. USAID concurred with GAO's recommendations.
According to the 2000 Census, approximately 588,000 Native Americans were residing on tribal lands. Tribal lands vary dramatically in size, demographics, and location. They range in size from the Navajo Nation, which consists of about 24,000 square miles, to some tribal land areas in California comprising less than 1 square mile (see figure 1). Over 176,000 Native Americans live on the Navajo reservation, while other tribal lands have fewer than 50 Native residents. The population on a majority of tribal lands is predominantly Native American, but some tribal lands have a significant percentage of non-Native Americans. In addition, while most tribal lands are located in rural or remote locations, some are located near metropolitan areas.

Tribes are unique in being sovereign governments within the United States. The federal government has recognized the sovereign status of tribes since the founding of the United States. The U.S. Constitution, treaties, and other federal government actions have established tribal sovereignty. To help manage tribal affairs, tribes have formed governments or subsidiaries of tribal governments that include schools, housing, health, and other types of corporations. In addition, the Bureau of Indian Affairs (BIA) in the Department of the Interior has a fiduciary responsibility to tribes and assumes some management responsibility for all land held in trust for the benefit of the individual Native American or tribe.

In Alaska, federal law directed the establishment of 12 for-profit regional corporations, 1 for each geographic region composed of Natives having a common heritage and sharing common interests, and over 200 Native villages. These corporations have become the vehicle for distributing land and monetary benefits to Alaska Natives to provide a fair and just settlement of aboriginal land claims in Alaska. The Native villages are entities within the state that are recognized by BIA to receive services from the federal government. The 12 regional corporations have corresponding nonprofit arms that provide social services to the villages.

Native American tribes are among the most economically distressed groups in the United States. According to the 2000 Census, about 37 percent of Native American households have incomes below the federal poverty level—more than double the rate for the U.S. population as a whole. Residents of tribal lands often lack basic infrastructure, such as water and sewer systems, and telecommunications services. According to tribal officials and government agencies, conditions on tribal lands have made successful economic development more difficult than in other parts of the country. A study done for the federal government, based on research gathered in 1999, found that the high cost and small markets associated with investment in tribal lands deter business investment. The federal government has long acknowledged the difficulties of providing basic services, such as electricity and telephone service, to rural areas of the country.
The concept of universal telephone service has its origins in Section 1 of the Communications Act, which states that the Federal Communications Commission was created “for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States a rapid, efficient, nationwide, and worldwide wire and radio communication service with adequate facilities at reasonable charges….” The goal of universal service is to ensure that all U.S. residents have access to quality telephone service regardless of their household income or geographic location. The Telecommunications Act of 1996 reaffirmed the commitment to universal service and expanded it to include not just traditional telephone service but access to advanced telecommunications services (such as high-speed Internet access) for schools, libraries, and rural health care providers. A 1995 report by the Census Bureau based on 1990 census data noted that about 47 percent of Native American households on tribal lands had telephone service, compared to about 95 percent of households nationally. In June 2000, the FCC Chairman noted that the Commission’s universal service policies “had yielded a remarkable rate of telephone subscribership, above 90 percent for the nation as a whole.” However, he also noted that telephone subscribership among the rural poor was roughly 20 percent lower than the rest of the nation, while Native Americans living on tribal lands were only half as likely as other Americans to subscribe to telephone service. In August 2000, FCC identified certain categories of Americans, including Native Americans, who were having difficulty obtaining access to advanced telecommunications services. According to data from the 2000 decennial census, the rate of telephone subscribership for Native American households on tribal lands was substantially below the national rate of 97.6 percent. While this data indicates some progress since 1990, changes since then are unknown due to a lack of more current data. Additionally, the rate of Internet subscribership is unknown because no federal survey has been designed to capture this information for tribal lands. According to the 2000 decennial census, the telephone subscribership rate for Native American households on tribal lands in the lower 48 states was 68.6 percent, while for Alaska Native Villages it was 87 percent—both substantially below the national rate of 97.6 percent. Figure 2 shows the number of tribal lands within various percentile ranges of telephone subscribership for Native American households, based on our analysis of 2000 decennial census data. We have separated Alaska Native tribal lands from the tribal lands in the lower 48 states because telecommunications infrastructure in Alaska differs from that of the lower 48 states due to Alaska’s weather and terrain. The data is shown for 198 tribal lands in the lower 48 states and 131 tribal lands in Alaska. Tribal lands with fewer than 100 people are not included in the data available from the Census Bureau. In these areas, there must be at least 100 people in a specific group, including American Indian and Alaska Native tribal groupings, before data will be shown. As figure 2 shows, there was considerable variation among tribes regarding telephone subscribership rates, with some comparable or higher than the national rate but most below it—and many substantially so. 
We found, for example, that the Kalispel tribal land in Washington had a telephone subscribership rate of 100 percent, while the tribal lands of the Kickapoo Traditional Tribe of Texas had a rate of 34 percent. To get a better understanding of telephone subscribership rates by individual tribe and population size, we reviewed data for the 25 tribal lands with the highest number of Native American households. These 25 tribal lands represent about 65 percent of all Native American households, as shown in Census 2000 data. The lands vary greatly in the number of Native American households located on them (from about 46,000 for the Navajo Nation to about 1,100 for Fort Berthold) and in geographic size, with the Navajo Nation’s lands comprising about 24,000 square miles and the Eastern Band of Cherokee’s land comprising about 83 square miles. As shown in figure 3, the Native American household telephone subscribership rates for these most populous tribal lands were all below the national rate of 97.6 percent. Nine of the 25 tribal lands, representing about 44 percent of Native American households on tribal lands in the lower 48 states, had telephone subscribership rates at a level below 78 percent—which is about what the national rate was over 40 years ago when the 1960 decennial census was taken. The subscribership rate for the most populous tribal land—the Navajo—was only 38 percent. Because the 2000 decennial census is the most current data available on telephone subscribership rates on tribal lands, it is not known whether these rates have changed between 2000 and the present. To help improve the accuracy of the next decennial census and collect demographic, socioeconomic and housing data in a more timely way, the Census Bureau developed the American Community Survey (ACS), which includes a question on telephone service. In January 2005, the Census Bureau began sending out the ACS questionnaire to households. Annual results will be available for populations on all individual tribal lands by summer 2010, and sooner for tribal lands with populations over 20,000. This schedule is based on the time it will take to accumulate a large enough sample to produce data for areas with populations as small as 600 people. The status of Internet subscribership on tribal lands is unknown because no federal survey has been designed to track this information. Although the Census Bureau and FCC currently collect some national data on Internet subscribership, and FCC also collects some state level data, none of their survey instruments are designed to estimate Internet subscribership on tribal lands. In addition, officials of both agencies told us that to the best of their knowledge, no other federal agency collects data on Internet subscribership. Unlike telephone subscribership data, the 2000 decennial census did not collect information on Internet subscribership, nor is the Census Bureau currently collecting it on the ACS. The Census Bureau does collect some national data on Internet subscribership through the Current Population Survey (CPS). However, this monthly survey of households conducted by the Census Bureau for the Bureau of Labor Statistics is designed primarily to produce national and state estimates for characteristics of the labor force. To obtain national and state estimates on Internet subscribership rates, supplemental questions on Internet and computer use have been added to the CPS questionnaire. 
According to a Department of Commerce report, based on October 2003 CPS data, the Internet subscribership rate for U.S. households was about 55 percent. However, Commerce Department officials told us that the CPS sample cannot provide reliable estimates of Internet subscribership on tribal lands because there are not enough tribal land households in the sample to provide a reliable measure. FCC collects data on the deployment of advanced telecommunications capability in the United States, but this data cannot be used to determine Internet subscribership rates for tribal lands. Pursuant to section 706 of the Telecommunications Act of 1996, FCC is required to conduct regular inquiries concerning the availability of advanced telecommunications capability for all Americans. To fulfill its mandate, FCC has issued four reports, starting in January 1999, on the availability of advanced telecommunications capability in the United States. To obtain data for these reports, FCC requires service providers to report the total number of high-speed lines (or wireless channels), broken down by type of technology, for each state. For each of the technology subtotals, providers also report additional detail concerning the percentage of lines that are connected to residential users and a list of the zip codes where they have at least one customer of high-speed service. Because the providers are not required to report the total number of residential subscribers in each zip code to whom they provide high-speed service, and because tribal lands do not necessarily correspond to zip codes, this data cannot be used to determine the number of residential Internet subscribers on tribal lands. Finally, data on the availability of “dial-up” Internet access is not provided in these reports for any areas in the country because it is not considered an advanced telecommunications capability. The FCC has acknowledged that the zip code data present an elementary view of where high-speed Internet service subscribers are located. In particular, its data collection method does not fully describe some segments of the population, such as Native Americans residing on tribal lands. FCC has recognized that its section 706 data collection efforts in rural and underserved areas need improvement to better fulfill Congress’ mandate. Without current subscribership data, it is difficult to assess progress or the impact of federal programs to improve telecommunications on tribal lands. In a September 2004 letter to the Census Bureau, the FCC Chairman at that time stated that in order to better implement section 706 of the Telecommunications Act, FCC needs to know the rate of Internet subscribership, and particularly, the rate of Internet subscribership in smaller and more sparsely populated areas of the country, that would include tribal lands. Given the limitations of the current Census Bureau and FCC data collection efforts, FCC requested the Census Bureau add a question to the ACS regarding Internet subscribership. The ACS is designed to collect information for communities across the country, including small geographic areas such as small towns, tribal lands, and rural areas. Both FCC and Census Bureau officials told us that if a question is added to the ACS, it would provide Internet subscribership data for the nation and smaller geographic areas. 
An FCC official also noted that a comparative survey like the ACS, one that shows the differences of Internet subscribership between tribal lands and other geographic areas, is far more valuable than a survey that only collects Internet subscribership data on tribal lands. Census Bureau officials mentioned to us, however, that there are several methodological issues related to making changes to the ACS. Because adding questions would lengthen the ACS and could result in a reduced response rate, the Census Bureau’s current practice is to add a question to the ACS only if it is mandated by law. They told us that section 706 of the Telecommunications Act mandates that FCC, not the Census Bureau, be responsible for collecting data on advanced telecommunications. Therefore, Congress would need to pass legislation mandating that the Census Bureau collect Internet subscribership data. FCC officials told us that currently it is not clear whether FCC will pursue collection of Internet subscribership data. The Department of Agriculture’s Rural Utilities Service (RUS) and FCC are responsible for several programs designed to improve the nation’s telecommunications infrastructure and make services affordable for all consumers. RUS’s programs focus on rural telecommunications development, while FCC’s universal service programs focus on providing support for areas where the cost of providing service is high, as well as for low-income consumers, schools, libraries, and rural health care facilities. All of these general programs can benefit tribal lands and Native American consumers. In addition, FCC has recognized the need to make special efforts to improve tribal telecommunications by establishing additional support programs specifically aimed at benefiting tribal lands and their residents. Issues have arisen, however, over some aspects of how eligibility for FCC’s universal service programs is determined with regard to tribal lands. Federal efforts to expand telephone service in underserved areas date back to 1949 when the Rural Electrification Administration was authorized to loan monies to furnish and improve the availability of telephone service in rural areas throughout the United States. In 1994, RUS replaced the Rural Electrification Administration. RUS programs provide support to improve rural telecommunications infrastructure through grants, loans, and loan guarantees. Eligible participants in the RUS grant, loan, and loan guarantee programs include federally recognized tribes. The RUS grant, loan, and loan guarantee programs can be used to improve telecommunications infrastructure in rural areas, which include many of the tribal lands. Tables 1 and 2 provide a summary listing of these grant and loan programs and eligible participants, along with recent funding levels. FCC also has several general programs to support improved telecommunications services. FCC’s universal service programs support the longstanding goal of making communications services available “so far as possible, to all the people of the United States.” The universal service programs put in place in the 1980s focused on making telephone service affordable for low-income consumers and areas where the cost of providing service was high. 
The Telecommunications Act of 1996 extended the scope of federal universal service support to make advanced telecommunications services (such as high-speed Internet access) available to eligible public and nonprofit elementary and secondary schools, libraries, and nonprofit rural health care providers at discounted rates. Universal service program operations are carried out by a not-for-profit corporation, the Universal Service Administrative Company (USAC), under FCC’s rules and oversight. Table 3 lists key FCC universal service programs and recent funding levels that could be used to improve service on tribal lands in areas where the cost of providing service is high; lower the cost of service to low-income individuals; and support telecommunications services for local schools, libraries, and rural health care centers. In addition to financial assistance, RUS and FCC’s Wireless Telecommunications Bureau established the VISION program in 2004 as a joint policy initiative to provide technical assistance to improve the provision of wireless broadband service in rural communities. VISION is part of a larger Rural Wireless Outreach Initiative between RUS, FCC’s Wireless Telecommunications Bureau, and private industry, which is intended to coordinate activities and information on financial and other assistance regarding telecommunications opportunities for rural communities. The program is designed to provide rural communities within the United States and its territories with on-site regulatory, legal, engineering, and technical assistance to identify barriers and solutions to providing wireless broadband services to these communities. Thirteen tribal organizations had applied for assistance from this program, but no awards had been made as of October 2005. The General Services Administration’s (GSA) Federal Technology Service (FTS) 2001 contract provides telecommunications services to federal agencies, the District of Columbia government, tribal governments, and insular governments such as American Samoa, at discounted prices. Several tribes, such as the Oneida Tribe of Indians of Wisconsin and the Choctaw Nation of Oklahoma, have made use of the FTS 2001 contract to improve the telecommunications infrastructure on their lands. Beginning in June 2000, FCC established additional support to improve telecommunications infrastructure deployment and subscribership on tribal lands. FCC took this step in recognition that Native American communities have, on average, the lowest reported telephone subscribership levels in the country. FCC’s Enhanced Link-Up and Lifeline programs, which began in 2000, provide additional discounts on the cost of telephone service for tribal and nontribal residents of tribal lands who have incomes at or below 135 percent of the Federal Poverty Guidelines or who participate in one of several federal assistance programs, such as food stamps or Medicaid. Enhanced Link-Up provides qualified participants with one-time discounts of up to $100 on installation fees. Enhanced Lifeline provides ongoing discounts on basic local telephone service that enable some qualified participants to pay as little as $1 a month. As with FCC’s other universal service programs, the service providers are reimbursed from FCC’s universal service fund for the discounts they give to the programs’ participants.
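To make the income-based eligibility test concrete, the following minimal sketch applies the 135 percent threshold described above. The poverty guideline amounts and the list of qualifying programs are illustrative placeholders: actual Federal Poverty Guidelines vary by year, household size, and state, and FCC's rules define the full set of qualifying federal assistance programs.

```python
# Illustrative sketch only; these guideline amounts and program names are
# placeholders, not the official Federal Poverty Guidelines or FCC's rules.

ILLUSTRATIVE_POVERTY_GUIDELINE = {1: 9_570, 2: 12_830, 3: 16_090, 4: 19_350}
QUALIFYING_PROGRAMS = {"food stamps", "medicaid"}   # illustrative subset

def eligible_for_enhanced_support(annual_income, household_size, programs):
    """Return True if a tribal-land resident would pass either test: income at
    or below 135 percent of the poverty guideline, or participation in a
    qualifying federal assistance program."""
    threshold = 1.35 * ILLUSTRATIVE_POVERTY_GUIDELINE[household_size]
    return annual_income <= threshold or bool(QUALIFYING_PROGRAMS & set(programs))

# A household of three with $20,000 in income and no program participation:
# 20,000 <= 1.35 * 16,090 = 21,721.50, so the household would qualify.
print(eligible_for_enhanced_support(20_000, 3, []))   # True
```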
Tables 4 and 5 list the number of Enhanced Link-Up and Lifeline participants (both Native American and non-Native American residents of tribal lands) and the amount of support distributed between June 2000 and December 2004. At present, service providers file quarterly data forms with USAC that are used in reimbursing them for the discounts they give to their subscribers through the Link-Up and Lifeline programs. This data can be broken out by state, but not by tribal land, because the reporting form does not ask service providers to indicate the number of participants and amount of funding by tribal land. State-level data, however, has limited use in measuring the performance of these programs with respect to individual tribal lands. Nearly all the states containing tribal lands have more than one of them, as shown earlier in figure 1, so their data represents a total across multiple tribal lands. Moreover, some tribal lands extend across state lines. The Navajo Nation’s land, for instance, crosses the borders of Arizona, New Mexico, and Utah; and the Standing Rock Sioux’s tribal land crosses the borders of North and South Dakota. Consequently, the participation and funding data relevant to these tribal lands (and others like them) are split among the data of multiple states. Because FCC does not have data on program participation and funding by individual tribal land, some basic questions cannot be answered, such as what percentage of residents of particular tribal lands are benefiting from the programs and how participation rates on individual tribal lands have changed over time. At one point, FCC took steps to obtain more detailed program data. When the Enhanced Link-Up and Lifeline programs were established in 2000, the Commission directed one of its bureaus to revise, as necessary, the form used by service providers for the general Link-Up and Lifeline programs already in operation. In June 2003, FCC sought comment on changes to its Lifeline program, including the collection of additional data, and made revisions to the form. In December 2003, FCC received approval from the Office of Management and Budget for the revised form, which included requiring service providers to list the number of their Enhanced Lifeline subscribers by individual tribal land. However, in spring 2004, some service providers met with FCC officials to voice concerns that the collection of such information would be difficult to incorporate into their billing systems, but did not provide specific cost estimates for its implementation. In March 2005, FCC indefinitely suspended the use of the revised form due to these concerns. FCC’s Tribal Land Bidding Credit program is designed to provide incentives for wireless providers to deploy wireless services across tribal lands. FCC is authorized to auction radiofrequency spectrum to be used for the provision of wireless services in the United States. Under the Tribal Land Bidding Credit program, FCC reduces the cost of a radiofrequency spectrum license to a winning bidder in a spectrum auction if the bidder agrees to deploy facilities and provide telecommunications service to qualifying tribal lands. The agreement includes constructing and operating a wireless system that offers service to at least 75 percent of the population of the tribal land area covered by the credit within 3 years of the grant of the license. Tribal lands with telephone subscribership below 85 percent are eligible for the program. The program began in 2000, with the first credits awarded in 2003.
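The two program rules just described lend themselves to simple checks: a tribal land qualifies if its telephone subscribership is below 85 percent, and a licensee meets its commitment if it offers service to at least 75 percent of the covered population within 3 years of the license grant. The following is a minimal, illustrative sketch; the function names and example figures are ours and do not reflect FCC terminology or actual program data.

```python
# Illustrative sketch of the two rules described above; not FCC terminology,
# and the example figures are hypothetical.

def land_eligible_for_bidding_credit(telephone_subscribership_pct):
    # Tribal lands with telephone subscribership below 85 percent qualify.
    return telephone_subscribership_pct < 85.0

def buildout_commitment_met(covered_population, tribal_land_population,
                            years_since_license_grant):
    # The licensee must offer service to at least 75 percent of the population
    # of the covered tribal land area within 3 years of the license grant.
    coverage_pct = 100.0 * covered_population / tribal_land_population
    return years_since_license_grant <= 3.0 and coverage_pct >= 75.0

print(land_eligible_for_bidding_credit(38.0))          # True: well below 85 percent
print(buildout_commitment_met(7_600, 10_000, 2.5))     # True: 76 percent coverage in under 3 years
```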
In total, the program has awarded credits to six licensees who have pledged to deploy facilities and provide telecommunications services to 10 tribal lands. Most of the credits to date have been awarded to two licensees for providing service on three tribal lands. Table 6 lists the dollar value of tribal land bidding credits awarded through April 2005. At present, it is unclear what the program’s long-term impact will be in creating a significant incentive to deploy wireless service on tribal lands. FCC has acknowledged that the program is underutilized by service providers, attributing this to economic and technical factors. Several industry and tribal stakeholders expressed concerns that the program has a limited ability to improve service on tribal lands. These stakeholders stated that the main problem with the program is that tribal land bidding credits deal with the least expensive cost element of providing wireless service to tribal lands: the spectrum license. In fact, they said that spectrum to serve tribal lands can be acquired more economically through spectrum leasing arrangements with other licensees than through the Tribal Land Bidding Credit program. In their view, the main barrier to deploying wireless service on tribal lands is the high cost of network infrastructure, such as cellular towers. During 2006, FCC will have an opportunity to begin reviewing the actual effect of the program. By then, licensees who received Tribal Land Bidding Credits in 2003 are supposed to have met the requirement to cover 75 percent of the tribal land area for which their credit was awarded. In spring 2002, FCC established the Indian Telecommunications Initiative (ITI) to provide assistance to improve telecommunications services on tribal lands. The Initiative’s strategic goals are to improve tribal lands’ telephone subscribership rates, increase the telecommunications infrastructure, and inform consumers about the financial support available through federal programs, such as the universal service programs. ITI also seeks to promote understanding, cooperation, and trust among tribes, government agencies, and the telecommunications industry to address telecommunications issues facing tribal lands. Since its inception, ITI has organized several informational workshops to provide tribes and tribal organizations with information about federal telecommunications programs such as Enhanced Lifeline and Link-Up. ITI has also used these workshops to disseminate information about FCC rules and policies that affect the deployment of telecommunications services on tribal lands, such as cellular tower siting procedures. FCC senior officials and other staff also attend and participate in a variety of meetings on telecommunications issues with tribal officials. FCC has also distributed educational materials to tribes and tribal organizations about its universal service programs and other issues of interest. The implementation of universal service programs is largely the joint responsibility of federal and state government. However, the sovereign status of tribes raises unique issues and concerns. Service providers, tribal officials, and others have cited two specific areas of concern. One involves FCC’s process to determine whether the FCC has jurisdiction to designate service providers as eligible to receive universal funds for serving tribal lands. 
A second concerns a statutory limitation that ties tribal libraries’ eligibility for universal service funding under the E-rate program to state Library Services and Technology Act funds. Some stakeholders we spoke with emphasized that deployment of services on tribal lands, particularly by wireless carriers, might be improved if FCC had a more timely process for determining its jurisdiction to designate a provider wanting to serve tribal lands as an Eligible Telecommunications Carrier (ETC). As defined by the Communications Act, service providers must be designated as an ETC in order to participate in FCC’s universal service programs. The Act gives the individual states the primary responsibility for designating ETCs. Initially, the Act made no provision for cases where a service provider might not be subject to state jurisdiction, such as providers operating on tribal lands. In 1997, Congress amended the Act by requiring FCC to determine a service provider’s eligibility to receive federal universal service funds in cases where a state lacks jurisdiction to make an ETC determination. In response, FCC developed a process by which a service provider seeking ETC status for serving a tribal land may petition the Commission to determine whether the provider is subject to the state commission’s jurisdiction. If FCC finds that the state does not have jurisdiction, FCC can make the ETC determination. To date, FCC has received ten applications for ETC designations involving tribal lands. Six of the applications were from tribally-owned wireline service providers, and four were from non-tribally-owned wireless service providers. FCC provided the tribally-owned wireline providers with ETC status within a few months of their application. Two different non-tribally-owned wireless service providers petitioned FCC for ETC designation on three separate tribal lands. As indicated in table 7, FCC granted one of these three petitions in 10 months. Another was withdrawn by the provider after more than 3 years with no FCC decision, while the third has been pending at FCC for more than 3 years. FCC has noted that determining whether a state or FCC has ETC jurisdiction regarding a tribal land is “a legally complex and fact specific inquiry, informed by the principles of tribal sovereignty, federal Indian law, treaties, as well as state law.” When we asked about the long timeframes involved with the first and third items in table 7, FCC officials explained that they must conduct a case-specific inquiry for each application to determine whether the Commission has the authority to make an ETC designation. In its 2001 Western Wireless decision, FCC noted that it would resolve the ETC petition in light of the guidance provided by the Supreme Court in Montana v. United States, 450 U.S. 544 (1981). That case set out the guiding principle that Indian tribes lack jurisdiction to regulate nonmembers on the reservation, but recognized two exceptions. Applying this framework to the service agreement between the Oglala Sioux Tribe and Western Wireless, FCC granted Western Wireless ETC status over its service to tribal members living within the Pine Ridge reservation. FCC has not issued any further guidance on how it will make its ETC decisions on tribal lands. FCC officials told us that the information needed to make a determination may change from application to application.
They said that they try to complete these designations in a timely fashion, but applicants may not provide sufficient information, and staff normally dedicated to these issues may need to focus on other issues facing FCC. In 2000, FCC sought public comment on the creation of a 6-month timeline for the resolution of jurisdictional issues surrounding an ETC designation on tribal lands. However, in 2003 FCC formally decided against creating this timeline because determining FCC’s jurisdiction over ETC designation on tribal lands “is a legally complex inquiry that may require additional time to fully address.” Some tribal officials we spoke with emphasized the importance of tribal libraries as a means for members to have Internet access and expressed concern about their difficulty in obtaining E-rate funding for their libraries. Tribal libraries can apply for universal service fund support through the E-rate program, provided they meet the program’s eligibility requirements. The Communications Act defines E-rate-eligible libraries as those eligible for assistance from a state library administrative agency under the Library Services and Technology Act (LSTA), which provides federal grant funds to support and develop library services in the United States. LSTA has two types of library grants that primarily relate to governmental entities: one for states and one for federally recognized tribes and organizations that primarily serve and represent Native Hawaiians. To be eligible for E-rate funds, a tribal library must be eligible for state LSTA funds and not just tribal LSTA funds. The eligibility criterion has practical implications for tribal libraries. Although we did not survey all the states on this issue, officials in two states told us that their state laws preclude tribal libraries within their states from being eligible to receive state LSTA funds, which has the effect of making them ineligible to receive E-rate funds. Officials in Oklahoma said that only county and city libraries are eligible for state funding such as LSTA monies. Tribal libraries are not county or city libraries and therefore not eligible for Oklahoma’s state LSTA funds. One former tribal librarian in Oklahoma told us that she did not apply for E-rate funding because the state library administrative agency provided her with documentation indicating that the tribe was not eligible for state LSTA funds. Montana officials told us that their state law also has similar limitations regarding tribal libraries’ eligibility for state LSTA funds. The eligibility criterion also has practical implications for the E-rate program. Libraries applying for E-rate funds must self-certify their LSTA eligibility. As part of its integrity process, USAC requires third-party verification of the eligibility requirement. Thus, USAC verifies a library’s eligibility for E-rate funds by asking state library administrative agencies to provide written certification of a library’s eligibility for state LSTA funds. This process has prompted a number of comments from several of those we interviewed. Some tribal and state library agency officials noted that the current eligibility criterion infringes on tribal sovereignty by involving the state in tribal library E-rate funding.
One state librarian, for example, expressed discomfort at being put in the position of acting on behalf of a sovereign tribe and the strong belief that eligibility for E-rate funding should be a matter between the tribe and USAC, without involvement by state government agencies. USAC officials told us that they have received some E-rate applications from tribal libraries. In those cases, a USAC board member successfully worked with the states in question to obtain the certifications. However, USAC officials and the USAC board member emphasized the time-consuming nature of these resolution efforts. In fall 2002, officials of FCC, USAC, and the Institute of Museum and Library Services (IMLS) met to discuss possible remedies for this situation. These discussions produced a consensus that a change to the E-rate eligibility requirement for libraries defined in the Communications Act could facilitate tribal libraries’ eligibility for E-rate funding. These discussions focused on a modification to the Act that would allow tribal libraries eligible for funding from either a state library administrative agency or a tribal government under the LSTA to be eligible for funding under the E-rate program. FCC officials told us that modifications to the Act would require legislative action by the Congress, because such modifications cannot be made by FCC through a Commission order or administrative proceeding. Tribal and government officials, Native American groups, service providers, and other entities we interviewed cited several barriers to improving telecommunications on tribal lands. The two barriers most often cited by officials of the tribes and Alaska regional native non-profit organizations we interviewed were the rural location and rugged terrain of tribal lands and tribes’ limited financial resources. The third most often cited barrier was a lack of technically trained tribal members to plan and implement improvements in telecommunications. A fourth barrier cited by tribal officials and other stakeholders was the complex and costly process of obtaining rights-of-way for deploying telecommunications infrastructure on tribal lands. The rural location and rugged terrain of most tribal lands and tribes’ limited financial resources were the barriers to improved telecommunications most often cited by officials of tribes and Alaska Native Villages we interviewed. These two barriers were also cited by representatives of service providers and federal agencies. These two barriers are interrelated, can deter providers from investing in infrastructure on tribal lands, and contribute to the low levels of subscribership on many tribal lands. Tribal lands are mostly rural and characterized by large land areas, rugged terrain such as mountains and canyons, low population density, and geographic isolation from metropolitan areas. Figure 4, from the Pine Ridge Indian Reservation in South Dakota, illustrates some of these characteristics. Generally, these factors make the cost of building and maintaining the infrastructure needed to provide service higher than it would be in urban settings. For example, more cable per customer is required over large, sparsely populated areas, and when those areas are mountainous, it can be more difficult and costly to install the cable.
The Rural Task Force, formed by the Federal-State Joint Board on Universal Service, documented the high costs of serving rural customers in a report issued in January 2000, which stated that the average telecommunications infrastructure cost per customer for rural providers was $5,000, while the average infrastructure cost per customer for non-rural providers was $3,000. Officials from 17 tribes and 11 Alaska regional native non-profit organizations we interviewed told us that the rural location of their tribes is a telecommunications barrier. Tribes’ limited financial resources are also seen as a barrier to improving telecommunications services on tribal lands. Many tribal lands—including some of those we visited such as the Navajo, the Mescalero Apache, the Yakama, and the Oglala Sioux—have poverty rates more than twice the national rate, as well as high unemployment rates. The 2000 U.S. Census showed that the per capita income for residents on tribal lands was $9,200 in 1999, less than half the U.S. per capita income of $21,600. Officials of 33 of the 38 Native American entities we interviewed told us that a lack of financial resources was a barrier to improving telecommunications services. Several of these tribal officials told us that their tribal governments must use their tribes’ limited financial resources on other priorities such as water and sewer lines, housing, and public safety. In addition, high levels of poverty on many tribal lands may also make it less likely that tribal residents will subscribe to those telephone and Internet services that are available, particularly when geographic barriers have increased the costs of those services. For example, a Yakama Nation tribal official told us that many residents cannot afford a computer or Internet access; some cannot even afford telephone service. These two factors, the rural location of tribal lands (which increases the cost of installing telecommunications infrastructure) and tribes’ limited financial resources (which can make it difficult for residents and tribal governments to pay for services), can combine to deter service providers from making investments in telecommunications on tribal lands. This lack of investment can result in a lack of service, poor service quality, and little or no competition. With regard to a lack of service, an official with the Yakama Nation told us that while many tribal residents in the more heavily populated areas have access to telephone service, the tribe’s service provider has not built additional infrastructure to reach less populated areas and has no plans to do so in the near future. A representative of the company that provides service to the Coeur d’Alene tribe told us that high-speed Internet was only available in certain areas of the Coeur d’Alene tribal land, that there were no immediate plans to expand the service area, and that there were cost issues in providing service to the more remote and less densely populated parts of the reservation. Another provider’s representative told us that providing digital subscriber lines (DSL) to most parts of the Eastern Band of Cherokee’s reservation would not be profitable because the land is rugged and connecting many of those who live in remote rural areas would require an investment that would be difficult to justify. With regard to service quality, of the 38 tribes and tribal representatives we interviewed, 9 mentioned service quality as a barrier to improved telecommunications.
One tribe told us that their local provider has no local service office and few technicians, so that the company may take days to repair or respond to a problem. With regard to the lack of competition, officers of 2 tribes told us that because there is only 1 provider, they have no choice but to pay the prices being charged for services, even though they think the prices are too high. The third barrier most commonly cited by tribal representatives was the lack of tribal members trained in or knowledgeable about telecommunications technologies. Officials of 13 of the 38 Native American tribes and tribal organizations we interviewed told us that lack of telecommunications training and knowledge among tribal members is a barrier to improving their telecommunications. Some of these officials said they needed more technically trained members to plan and oversee the implementation of telecommunications improvements, as well as to manage existing systems. For example, one tribal official told us that he is currently understaffed and is running a multi-tribe wireless network with just one other person. Another tribal official told us that there is only one tribal member with formal training in telecommunications and that the tribe needs a well trained person to take charge of the tribe’s telecommunications needs. An official of the Coeur d’Alene tribe, who has technical training, told us that the tribe does not have a sufficient number of technically knowledgeable staff members to develop and maintain needed telecommunications systems. The same Coeur d’Alene tribal official also told us that tribes without technically trained staff would be at a disadvantage in negotiating with service providers. This official added that having tribal members trained in telecommunications was necessary to ensure that a tribe’s planned improvements included the equipment and technology the tribe wanted and needed. In addition, one non-tribal stakeholder mentioned that a lack of training prevented tribes from choosing appropriate technologies for their specific needs. One industry stakeholder mentioned that tribes needed a better understanding of the range and capacity of shared spectrum wireless technology so they would not be disappointed by its limitations. A 1995 Office of Technology Assessment study of telecommunications on tribal lands stated that most Native American reservations, villages, and communities would benefit from developing a plan or vision of how telecommunications could best meet their educational, health, economic development, and cultural needs. In 1999, the Department of Commerce estimated that very few tribes had telecommunications plans. Of the 38 tribes and tribal organizations we interviewed, 14 told us they have some type of technology plan and 7 more said they had a plan in development. Industry stakeholders also told us that having tribal staff knowledgeable in telecommunications policies improves the process of deploying services on tribal lands. One service provider told us that if tribes delegated telecommunications decisions to a tribal governmental committee, the company could provide service more effectively and efficiently. Instead, when a company has to bring telecommunications decisions before the full tribal council, the process can be very time consuming because the full tribal council meets infrequently and telecommunications issues are often not at the top of the agenda. 
Another provider told us that having staff knowledgeable in telecommunications policies and procedures, such as rights of way and contract issues, allows providers to more quickly and effectively deploy services because time is not spent negotiating over unfamiliar terms. According to several service providers and tribal officials, obtaining a right-of-way through Indian lands is a time-consuming and expensive process that can impede service providers’ deployment of telecommunications infrastructure. The right-of-way process on Indian lands is more complex than the right-of-way process for non-Indian lands because BIA must approve the application for a right-of-way across Indian lands. BIA grants or approves actions affecting title on Indian lands, so all service providers installing telecommunications infrastructure on Indian lands must work with BIA or its contractor (realty service provider) to obtain a right-of-way through Indian lands. To fulfill the requirements of federal regulations for rights-of-way over Indian lands and obtain BIA approval, service providers are required to take multiple steps and coordinate with several entities during the application process. These steps must be taken to obtain a right-of-way over individual Indian allotments as well as tribal lands. Several of the steps involve the landowner, which could be an individual landowner, multiple landowners, or the tribe, depending on the status of the land. For example, the right-of-way process requires a) written consent by the landowner to survey the land; b) an appraisal of the land needed for the right-of-way; c) negotiations with the landowner to discuss settlement terms; d) written approval by the landowner for the right-of-way; and e) BIA approval of the right-of-way application. Service providers told us that a lack of clarity in federal regulations for rights-of-way over Indian lands can also slow down the right-of-way approval process. During the right-of-way approval process, BIA has a responsibility to ensure that the right-of-way suits the purpose and size of the equipment being installed on the land. However, federal regulations do not have guidance or descriptions for advanced telecommunications infrastructure, which would assist BIA in evaluating telecommunications rights-of-way applications. According to a Department of the Interior official, descriptions and guidance for advanced telecommunications infrastructure are absent because the regulations were created prior to the advent of modern telecommunications equipment. For example, the federal regulations have guidance and descriptions for the size of the right-of-way needed and voltage levels of electrical equipment that can be installed for commercial purposes, but similar descriptions and guidance are not available for advanced telecommunications rights-of-way. According to service providers, this lack of clarity can cause grey areas for BIA when it attempts to classify the type of advanced telecommunications infrastructure the service provider intends to install and whether it is for commercial or residential purposes. This adds time to the right-of-way approval process because BIA has to determine if the regulations allow for the installation of the applicant’s infrastructure. A BIA official acknowledged that portions of the federal regulations, including the section on telecommunications infrastructure, are outdated. 
As a result, BIA is currently revising the regulations to better apply to modern utility technologies, including advanced telecommunications infrastructure, but timeframes for completion of this work have not been established. As mentioned above, BIA requires that service providers obtain approval from the individual landowner or the tribe for a right-of-way. Service providers told us that obtaining landowner consent for a right-of-way across an individual Indian allotment is time consuming and expensive, which can delay or deter deployment of telecommunications infrastructure on tribal lands. For example, one service provider told us that an individual Indian allotment of land can have over 200 owners, and federal regulations require the service provider to gain approval from a majority of them. The official stated that the time and cost of this process is compounded by the fact that a telecommunications service line often crosses multiple allotments. In addition, if the service provider cannot obtain consent for the right-of-way from the majority of landowners, the provider is forced to install lines that go around the allotment, which is also expensive. Several tribes are moving towards owning or developing part or all of their own telecommunications systems to address the barriers of tribal lands’ rural location and rugged terrain and tribes’ limited financial resources, which can deter service providers from investing in telecommunications on tribal lands. These tribes are using federal grants, loans, or other assistance, long-range planning, and private-sector partnerships to help improve service on their lands. In addition, some tribes have addressed these barriers by focusing on wireless technologies, which can be less costly to deploy across large distances and rugged terrain. Some tribes are addressing the shortage of technically-trained tribal members to plan and implement improvements on tribal lands through mentoring and partnerships with educational institutions. To help reduce the time and expense required to obtain a right-of-way across tribal lands, one tribe is developing a right-of-way policy to make the tribal approval process more timely and efficient. From our interviews of officials of 26 tribes and 12 Alaska regional native non-profit organizations, we found that 22 are addressing the need to improve their telecommunications services by developing or owning part or all of their own local telecommunications network. Some of those we spoke to told us that they were doing this because their provider was unwilling to invest in improved telecommunications services, in part due to the barriers of the tribe’s rural location, rugged terrain, and limited financial resources. An additional 10 tribes told us that they have considered or are considering owning part or all of their telecommunications systems. Four of the 6 tribes we visited are developing their own telecommunications systems to address the lack of investment by telecommunications companies. These tribes are addressing their limited financial resources to fund telecommunications improvements by one of three methods. Two of the 4 have obtained federal funds, another has reduced its use of services from the current provider to help fund its own system, and a fourth tribe has partnered with a local business also adversely affected by poor telecommunications service. 
Two of these tribes also told us that they have been able to provide better service and lower prices than the incumbent provider because they are more concerned about providing service than about making a profit. The Coeur d’Alene Tribe in Idaho is using an RUS grant to overcome its limited financial resources and develop its own high speed wireless Internet system. Tribal officials told us that the wireline service provider for the Coeur d’Alene Tribe had not deployed the necessary equipment to offer high speed Internet access to all residents on tribal lands because deploying the equipment was not profitable. (An official of the service provider told us that high speed Internet was only available in certain areas, that there were no immediate plans to expand the service area, and that there were cost issues in expanding service to the more remote and less densely populated parts of the reservation.) The tribe applied for an RUS Community Connect Broadband grant to purchase and deploy a wireless system to provide high-speed Internet access to all residents of the tribal land. This type of grant can be used for expenditures for a wide array of infrastructure and related needs, including necessary equipment that many tribal members cannot afford. For example, the grant allows for the purchase of equipment required to connect households and businesses to the wireless system, and for the construction of a community technology center for training and Internet access. The grant is being used to fund 5 towers to ensure that the wireless system reaches all populated Coeur d’Alene lands, as well as fiber optic cable, technical staff, and operational costs. The grant will make high-speed Internet access available to all residents at the Community Technology Center, shown in figure 5, at no cost, and high-speed Internet access to homes and businesses will be available for purchase. The grant will also provide tribal members training in computer use and maintenance. Tribal officials told us that after the first 2 years of operation, they expect to earn sufficient revenue from system subscribers to fund needed maintenance and improvements. The Mescalero Apache in New Mexico used RUS loans to overcome financial barriers and establish their own telecommunications company. The tribe also borrowed equipment from an equipment manufacturer until it was able to purchase its own. Tribal officials told us that their former service provider had not invested adequate funds in the telecommunications network on Mescalero Apache tribal lands to provide high quality voice or data services. They added that, as a result, telephone service was poor and high quality voice and data services, such as Internet access, were not widely available. The Mescalero Apache Tribal Government purchased the telecommunications network from the local telephone company that had been providing service on the tribal land. The tribe formed Mescalero Apache Telecommunications, Inc. (MATI) to develop this network and directed the company to focus on providing services to all Mescalero Apache lands and not just on maximizing profit. MATI then rebuilt the system, putting in more than 1,000 miles of fiber-optic cable and providing high-speed Internet access as well as local and long distance telephone service. According to a MATI official, telephone and high-speed Internet access, such as DSL, are now nearly universally available within reservation boundaries. 
MATI has deployed various high-speed Internet access services to tribal businesses and schools. Figure 6 shows the Mescalero Apache School computer lab, which utilizes MATI-provided Internet connectivity. The Yakama Nation in Washington established a long-range plan to overcome its financial barriers by using funds that the tribal government saved over the past few years by reducing its use of telecommunications services from its provider. The tribe is using these savings to develop its own telecommunications system to provide telephone and high-speed Internet access. The tribe is also using monies from the negotiation of utility rights-of-way. The tribal government made the decision to develop its own telecommunications company several years ago, partly in response to the increase in monthly telecommunications charges levied by the local provider, which raised the tribe’s annual cost from $275,000 to $325,000. At that time, the tribe put together a long-range plan that required the tribe to reduce its use of the current provider’s services, and use the resulting savings to develop its own system. A tribal official told us that long-range financial planning and careful budgeting have been important to the tribe’s success and that infrastructure has been purchased or installed each year based on what the tribe could afford. Since 1998, the tribe has used annual savings from reduced telephone services and funds from other services to establish a telecommunications company and then purchase related equipment, including fiber optic cable. The tribe was able to purchase this fiber optic cable at 25 percent of its retail price and negotiated with a local contractor to install the fiber at a price far below the market rate. The tribe plans to sell the equipment necessary to connect to the new telecommunications system to tribal members and tribal businesses. The Eastern Band of Cherokee in North Carolina overcame financial barriers by partnering with another local business to build a fiber optic cable network throughout and beyond its tribal lands to provide high-speed Internet access and transport. The Eastern Band of Cherokee’s tribal lands are located in the Smoky Mountains and are geographically isolated from major metropolitan areas that have Internet access points. As a result, it is expensive to connect infrastructure in the area to the nearest high-speed Internet access points. A tribal official told us that the tribe’s service provider did not expand or upgrade the telecommunications infrastructure on tribal lands because the provider did not find the additional investment in infrastructure to be profitable. (The provider representative told us that providing DSL to most parts of the reservation would not be profitable because the land is rugged and rural and connecting many of those who live in remote rural areas would require an investment that would be difficult to justify.) A tribal official told us that one example of the poor service quality is an outage that occurred within the past year. All communications services were unavailable for 48 hours in 6 counties because the company’s copper wire was cut. Because the system has no backup provision, there was no service until the cut was repaired. The Cherokee told us that their casino lost millions of dollars during the outage and that the loss for the region as a whole was estimated at $72 million.
To improve service and offer residents on tribal lands high-speed Internet access, the tribe partnered with a local corporation that provides electronic income tax filing services and had also suffered financial losses from the recent outage. Together, the tribe and the corporation are constructing a fiber optic cable network, both on and off tribal lands. Figure 7 shows fiber being deployed for this network. The Eastern Band of Cherokee and their partner have formed a company that will act as both a wholesaler and a retailer of telecommunications services. A company official told us that because of the cost of putting in the fiber and the low density of the service area, a private, for-profit company would never have made this level of investment. Officials of the tribe and the company told us that the tribe will use its ownership in these networks and future planned deployment of cable and wireless infrastructure to ensure that all residents of tribal lands can receive high-speed Internet, VoIP (Voice over Internet Protocol), and other information and content applications at costs and quality levels comparable to or better than those in metropolitan areas. Several tribes we interviewed have focused their efforts on wireless technologies to help address the barriers of tribal lands’ rural, rugged location and tribes’ limited financial resources, with funding for these efforts coming from both public and private sources. Service providers and equipment manufacturers told us that wireless service is often less expensive to deploy across large distances than wireline service because wireless infrastructure, such as a tower, is less expensive to deploy than wireline infrastructure covering the same distances. Examples of tribes focusing on wireless technologies include the following: Several tribes have deployed shared spectrum wireless networks to provide high-speed Internet access. For example, the Southern California Tribal Chairman’s Association (SCTCA), a consortium of 17 federally recognized tribes, received a grant from a private foundation to establish a wireless network, called the Tribal Digital Village Network (TDVNet), to provide high-speed Internet access to all 17 tribes. SCTCA tribes are located in Southern California in remote and hilly terrain and scattered across 150 square miles. TDVNet utilizes shared spectrum technologies both because of their low cost and because the equipment can operate on solar power. This is particularly important in remote areas where electrical power may not be available. TDVNet staff are also developing Voice over Internet Protocol (VoIP) capabilities to provide telephone service over high-speed Internet access in those tribal communities where the deployment of wireline service is cost prohibitive. The Coeur d’Alene Tribe and the Washoe Tribe of Nevada and California are deploying similar networks. Service provider officials in Alaska told us that satellite telecommunications systems are the only telecommunications option for providing telephone service to many Alaska Native Villages because the vast distances from these areas to existing infrastructure make wireline systems too expensive to install. A major Alaska service provider is utilizing a combined satellite and shared spectrum wireless network to extend high-speed Internet access to many Alaska Native Villages. In addition, 2 tribes we visited addressed their need for improved telecommunications services by encouraging wireless companies to compete with wireline providers for customers on their lands.
In both cases, the wireless companies have obtained ETC status and are able to obtain universal service funds, particularly High Cost Fund, Enhanced Lifeline, and Enhanced Link-Up support, to profitably provide service in these areas. The Oglala Sioux in South Dakota encouraged a wireless company to provide service in the area to improve services and reduce the cost of telephone service to tribal land customers. According to tribal and wireless service provider officials, the key to developing this solution was the wireless provider’s ability to use universal service funds to help subsidize the costs of its network and offer discounted telephone service to tribal land residents. With consent from the tribe, the wireless provider applied to FCC for ETC status, which was granted in 2001, enabling the provider to access universal service funds. The tribe also worked with the provider to create an expanded local calling area that included all areas of the reservation and the town of Rapid City, South Dakota. According to a tribal official, the addition of Rapid City, South Dakota, as part of the local calling area was an important cost-saving measure for the tribe because a significant number of Oglala Sioux live in the Rapid City area. According to tribal and service provider officials, this wireless service allows tribal members to reach public safety services from nearly any location on tribal lands. A tribal official said that this is particularly important due to the tribe’s large land area, remote location, and the summer and winter weather extremes in the area. The tribal official also told us that the wireless provider initially anticipated having about 300 customers on the Oglala Sioux’s Pine Ridge Indian Reservation land, but had about 4,000 customers within 1 year of offering service. The Navajo government has encouraged 2 wireless providers to offer services on Navajo lands in competition with wireline providers. The Navajo Nation encourages providers to deploy wireless telecommunications networks because providing wireline telecommunications throughout the Navajo Nation is cost prohibitive due to the tribe’s large land area, which is about the size of West Virginia. Census data indicate that residents on Navajo lands in Arizona, New Mexico, and Utah are among the most economically distressed groups in the United States. Tribal officials told us that competition is the best method to lower prices and improve services. One wireless provider has been able to access universal service funds to make service more affordable. Officials from wireless companies told us that access to universal service program funds combined with the use of less costly wireless technologies provides a viable business case for entry onto Navajo lands. Some tribes we visited discussed ways they were developing technical expertise in telecommunications, while others spoke of the importance of the technical expertise they had, particularly in helping them plan for telecommunications improvements. Tribal, industry, and government stakeholders said that training in telecommunications technologies provides tribal members with some of the necessary skills to operate the tribes’ own telecommunications networks. Several tribal officials told us that having staff with the technical expertise necessary to plan and manage telecommunications improvements was critical to their efforts.
However, fewer than half of the tribal officials we interviewed told us that their tribes have developed telecommunications plans or estimated the cost of planned improvements. One tribe that has taken steps to get needed technical training is the Coeur d’Alene Tribe. The tribe plans to provide two colleges with access to its new high-speed Internet system in exchange for distance learning classes and technical training. Similarly, the Yakama Nation has proposed to connect a local university to its telecommunications system in exchange for technical training for its staff. A Yakama official emphasized that having trained staff to manage and maintain the telecommunications system once it is operational is very important, and the tribe determined that this kind of exchange with a local university would help provide the staff with the necessary training. The Mescalero Apache Tribe has improved its technical capacity by hiring technically trained staff and has created a technical mentoring program. MATI hired tribal and non-tribal members to operate its telephone company. Although about half of MATI’s staff consists of non-tribal members, tribal officials expect to hire more tribal members as they receive the necessary training. Many of the employees who are not tribal members are experienced and technically proficient. MATI has created a mentoring program partnering the experienced and technically trained employees with newer employees. The goal is to create a self-sufficient tribal staff with the knowledge to understand and operate a telecommunications network. In addition, the company offers technical consulting services to other tribes that are interested in providing their own telecommunications network. MATI also hosts an annual telecommunications conference for tribes and municipal governments to inform them about the basics of telecommunications finance and technology. In addition, MATI has used its technical expertise to explore new ways to deploy telecommunications services. Figure 8 shows the Voice over Internet Protocol service platform that MATI uses to send voice conversations over the Internet. To address the current lack of computer and Internet knowledge among its tribal members, the Coeur d’Alene Tribe plans to provide training and Internet access at the Community Technology Center as long as its budget permits. Those attending training will be assisted by the recently hired technical staff in repairing and refurbishing computers that belonged to tribal offices, and will be allowed to keep the computers for home use once the work is complete. The Yakama Nation and Eastern Band of Cherokee also plan to train tribal members in computer and Internet use at an existing tribal technology center. Officials of several tribes told us that having staff with technical expertise was critical to their efforts to plan their telecommunications. For example, a tribal official of the Rincon Band of Luiseno Mission Indians of the Rincon Reservation told us that a tribal member with technical knowledge determined the need for improved Internet access and identified the appropriate technology (wireless broadband). He also identified a funding opportunity to bring high-speed Internet access to 17 Southern California tribes, most of which did not have Internet access because of geographic barriers and prohibitive infrastructure costs. Officials of 14 of the 38 tribes and tribal organizations we interviewed told us that they have developed a technology plan.
An official of the Coeur d’Alene Tribe told us that plans are important to ensure that tribes have selected technologies that are appropriate for their tribal needs and geography. All six of the tribes we visited are taking actions to improve their telecommunications based on plans they developed. Most of the tribal officials we interviewed told us that their tribes do not have cost estimates for improving telecommunications. The Coeur d’Alene tribal official told us that determining the cost of new systems and making plans to pay for these improvements is important. This official added that plans should not only include information about how to finance the system, but should also describe the means to pay for training of staff so they will have the technical expertise required to maintain and manage the current or proposed system. For example, Yakama Nation and Coeur d’Alene tribal officials stated that they designed telecommunications systems that will produce revenue from customers sufficient to pay for improvements, maintenance, and technically trained staff. Navajo Nation officials and service providers told us that the Navajo Nation’s right-of-way approval process is time-consuming and expensive, which has delayed or deterred the deployment of telecommunications infrastructure on Navajo land. For example, an official from one service provider told us that this tribal approval process impedes service because the timeline for obtaining tribal council approval varies for each right-of-way application, tribal departments can differ on the goals and price of the right-of-way, and it takes extra time for these departments to reach consensus. A Navajo official agreed that the tribe’s right-of-way processes can delay deployment of telecommunications infrastructure and increase its cost because timelines vary for each application. Another official told us that a major reason for this slow process is that tribal entities involved in Navajo’s internal rights-of-way process have different opinions on the goals and price of telecommunications rights-of-way. For example, some tribal officials expect high up-front rights-of-way fees based on their experience in granting rights-of-way for natural resources like coal, which would typically produce a higher revenue stream than telecommunications. To address this issue, Navajo officials are developing an approach to reduce the time and expense required to obtain tribal consent for a telecommunications right-of-way across their land. The Navajo Nation Telecommunications Regulatory Commission (NNTRC) has drafted a policy to streamline tribal consent for telecommunications rights-of-way. (Figure 9 shows the NNTRC’s headquarters in Window Rock, Arizona.) One of NNTRC’s functions is to decrease the barriers service providers encounter while deploying telecommunications infrastructure on the land. Through information-gathering sessions between commissioners and service providers, the commission determined that the Navajo process for the approval of telecommunications rights-of-way needed to be changed because the deployment of telecommunications services was being delayed. In order for NNTRC to make changes to the Navajo right-of-way process, the Tribal Council first granted NNTRC full authority over telecommunications issues, such as rights-of-way for telecommunications infrastructure.
To address the barriers service providers encounter with the Navajo right-of-way process, NNTRC drafted a policy that grants NNTRC the sole responsibility for providing tribal approval for a right-of-way. This would allow “one-stop shopping” for the service providers, who would no longer have to coordinate with multiple tribal departments and offices. According to a Navajo official, this policy is currently being reviewed for approval by several of the tribe’s government departments. Following this approval process, NNTRC intends to implement the policy. In addition, NNTRC officials stated that a more feasible price structure for telecommunications rights-of-way would better reflect their market value. This price structure would include an upfront payment covering the market value of the land plus an additional percentage of future earnings from the equipment. The officials told us that this type of arrangement would assist the service provider’s business case because the provider would have to release less capital in the beginning of the project, while offering telecommunications services to Navajo residents. Once the infrastructure begins to produce a revenue stream and has a viable business case, the Navajo Nation would receive a percentage of these funds for the life of the infrastructure. Under the principles of universal service, as established by Congress, FCC has recognized the need to promote telecommunications deployment and subscribership on tribal lands. Despite improvements in both deployment and subscribership of telecommunications services, as of 2000, Native Americans on tribal lands still lagged significantly behind the rest of the nation. The underlying cause of this problem is difficult to determine because of a paucity of current information about both deployment and subscribership of telecommunications for Native Americans on tribal lands. Moreover, this lack of adequate data makes it difficult for FCC and Congress to assess the extent to which federal efforts designed to increase telecommunications deployment and subscribership on these lands are succeeding. One difficulty we found relates to a statutory provision in the Communications Act that precludes some tribal libraries from benefiting from a universal service program. The current statutory provision does not allow tribal libraries to obtain E-rate funding for libraries unless the tribal library is eligible for assistance from a state library administrative agency under LSTA. In at least two cases, tribes have not applied for E-rate funds because their tribal libraries are not eligible for state LSTA funds. However, FCC has stated that it cannot modify the eligibility criteria in the statute. Clarifying this issue could help bring high-speed Internet access to more residents of tribal lands through their tribal libraries. In reviewing how some tribes are addressing barriers to improving telecommunications services on tribal lands, we found that tribes took a variety of approaches to addressing these barriers, suggesting that flexibility in planning and implementing telecommunications improvements on tribal lands is important. Because circumstances vary widely, we do not know the extent to which other tribes and Alaska Native Villages may be able to benefit from the experiences of these six.
However, given that many tribes and Alaska Native Villages face similar barriers, policymakers working to assist tribes and Alaska Native Villages in improving telecommunications may want to consider the approaches employed by these tribes. Congress should consider directing FCC to determine what additional data is needed to help assess progress toward the goal of providing access to telecommunications services, including high-speed Internet, for Native Americans living on tribal lands; determine how this data should regularly be collected; and report to Congress on its findings. To facilitate Internet access for tribal libraries, Congress should consider amending the Communications Act of 1934 to allow libraries eligible for Library Services and Technology Act funds provided by the Director of IMLS to either a state library administrative agency or to a federally recognized tribe to be eligible for funding under the E-rate program. We provided a draft of this report for comment to BIA, the Census Bureau, NTIA, FCC, General Services Administration, Institute of Museum and Library Services, and RUS. BIA provided written comments, presented in appendix IV, stating that BIA recognized the need to update its rights-of-way regulations to include advanced telecommunications infrastructure and is working to include this in its trust-related regulations. BIA stated that it will issue a Rights-of-Way Handbook in March 2006 to ensure consistent application of existing regulations. RUS and the General Services Administration responded that they had no comments. The Institute of Museum and Library Services provided written comments, found in appendix V, stating that the report accurately reflected its understanding of the relevant issues and concerns. NTIA offered technical comments, as did the Census Bureau and FCC, which we have incorporated where appropriate. In the draft report, we recommended that the Chairman of the Federal Communications Commission direct FCC staff to determine what additional data is needed to help assess progress toward the goal of providing access to telecommunications services, including high-speed Internet, to Native Americans living on tribal lands; determine how this data should be regularly collected; and report to Congress on its findings. In oral comments responding to our recommendation, FCC agreed that additional data is needed to help assess progress toward the goal of providing access to telecommunications services, including high-speed Internet. However, FCC did not agree that it is the organization best positioned to determine the data needed in this context, noting that other federal agencies and departments possess expertise and more direct authorization to determine whether and what economic and demographic data are needed to support policy making. In view of FCC’s disagreement with our recommended action, we have made it a matter for congressional consideration. We continue to believe that FCC, as the agency responsible under the Communications Act for the goal of making available, as far as possible, telecommunications at reasonable charges to all Americans, is the appropriate agency to determine what data is needed to advance the goal of universal service and support related policy decisions—especially for Native Americans on tribal lands who continue to be disadvantaged in this regard.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, tribal organizations and governments, Bureau of Indian Affairs, Census Bureau, Economic Development Administration, Federal Communications Commission, General Services Administration, Indian Health Service, Institute of Museum and Library Services, National Science Foundation, National Telecommunications and Information Administration, Rural Utilities Service, Universal Service Administrative Company, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no cost on the GAO web site at http://www.gao.gov. If you have any questions about the report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix VI. The objectives of this report were to determine: 1) the status of telecommunications subscribership (telephone and Internet) for Native Americans on tribal lands in the lower 48 states and Alaska; 2) federal programs available for improving telecommunications services on tribal lands; 3) the barriers that exist to improving telecommunications on tribal lands; and 4) how some tribes have addressed these barriers. To respond to the objectives of this report, we gathered information from a variety of sources. Specifically, we gathered information by (1) reviewing material relevant to telecommunications on tribal lands from federal, state, Native American, academic, non-profit, and private sources; (2) interviewing federal and state regulatory agency officials; (3) interviewing officials of national and regional Native American organizations; (4) interviewing officials of telecommunications provider and equipment manufacturer organizations; (5) conducting telephone interviews of tribal officials on 26 tribal lands and 12 Alaska regional native non-profit organizations; and (6) making site visits to six tribal lands. To provide information on the status of telecommunications subscribership for Native Americans on tribal lands in Alaska and the lower 48 states, we analyzed data from the 2000 decennial census. To determine telephone subscribership, we used Census 2000 data product, American Indian and Alaska Native Summary File. This summary file includes tabulations of the population and housing data collected from a sample of the population (within most Native American and Alaska Native areas, 1 in every 2 households). In these areas, there must be at least 100 people in a specific group, including Native American and Alaska Native tribal groupings, before data will be shown. In our analysis of this 2000 Census data we did not include Native individuals or households located in Oklahoma Tribal Statistical Areas (OTSA). OTSAs are statistical entities identified and delineated by the Census in consultation with federally recognized Native American tribes in Oklahoma that do not currently have a reservation, but once had a reservation in that state. Boundaries of OTSAs are those of the former reservations in Oklahoma, except where modified by agreements with neighboring tribes for data presentation purposes. 
We also excluded all other tribal lands in the Census 2000 data that were not federally recognized. As a result of these exclusions and the Census reporting threshold, the data show 198 lower 48 tribal lands and 131 Alaska Native Villages for people who indicated their race, alone or in combination, as American Indian and/or Alaska Native. We assessed the reliability of the data from the Census Bureau by interviewing knowledgeable agency officials about data collection methods, particularly those pertaining to collection of data on tribal lands, reviewing existing documentation on Census data, and conducting electronic testing of the data. We determined that the data were sufficiently reliable for the purposes of this report. To determine the status of Internet subscribership on tribal lands, we spoke to the Census Bureau about the Current Population Survey (CPS). The CPS is a monthly survey of households conducted by the Census Bureau for the Bureau of Labor Statistics and is designed primarily to produce national and state estimates for characteristics of the labor force. To obtain national and state estimates on Internet subscribership rates, supplemental questions on Internet and computer use have been added to the CPS questionnaire. However, the CPS sample cannot provide reliable estimates of Internet subscribership on tribal lands. To determine the availability of federal programs that improve telecommunications on tribal lands, we interviewed agency officials from the Federal Communications Commission (FCC), the Universal Service Administrative Company (USAC), the Rural Utilities Service (RUS), the National Telecommunications and Information Administration (NTIA), the Bureau of Indian Affairs (BIA), the Economic Development Administration (EDA), the Indian Health Service (IHS), the Institute of Museum and Library Services (IMLS), the National Science Foundation (NSF), and the General Services Administration (GSA). To determine the funding amounts for these programs, we reviewed annual federal budget data and agency documents. To learn about FCC programs targeted to tribal lands, we interviewed tribal officials, FCC staff, and service providers. To learn the amount of funds disbursed and number of program subscribers for Enhanced Lifeline and Enhanced Linkup, we obtained information from the Universal Service Administrative Company. To assess the reliability of the FCC’s data for the Enhanced Lifeline and Enhanced Linkup programs, we interviewed agency officials knowledgeable about the data and the systems that produced them. The FCC does not track this information by tribal lands; however, we determined that the data were sufficiently reliable to present the total amount of money disbursed by year and the total number of subscribers to these programs by year. To assess the reliability of FCC’s data on Tribal Land Bidding Credits, we interviewed agency officials knowledgeable about the data and the systems that produced them. We determined that the data were sufficiently reliable for the purposes of our report. To learn what barriers exist to improving telecommunications services on tribal lands, we analyzed information from various federal agencies, such as the Census Bureau, FCC, and the Department of Commerce, as well as reports from a private foundation, the Benton Foundation, and a national tribal organization, the National Congress of American Indians. We reviewed two previous studies of telecommunications technology on tribal lands.
We also reviewed testimony from hearings before the Senate Committee on Indian Affairs and the House of Representatives Committee on Financial Services and Committee on Resources. We conducted interviews with national and regional tribal organizations, major local service providers, selected wireless equipment manufacturers, and nonprofit organizations that have contributed to improving telecommunications on tribal lands. Finally, we conducted interviews with officials of 26 tribes and 12 Alaska regional native nonprofit organizations. We selected officials of tribal lands for interviews by first separating the Alaska Native Villages from the federally recognized reservations in the lower 48 states because telecommunications infrastructure in Alaska differs from that of the lower 48 due to Alaska’s weather and terrain. To learn about the barriers facing Alaska Native Villages and the efforts to overcome them, we interviewed officials from 12 Alaska regional native nonprofit organizations. To learn about the barriers facing tribes in the lower 48 states, we interviewed tribal officials from a total of 26 of the more than 300 tribal lands of the lower 48 states, selected by using demographic and economic indicators from both 1990 and 2000 Census data for natives and nonnatives, as well as information from various reports, studies, and testimonies on individual tribal efforts to improve telecommunications. To select tribes in the lower 48 states to interview, we focused on the larger and more populated tribal lands in the lower 48 states, using Census data to select those tribes with populations over 100 persons and those tribal lands larger than one square mile. We also excluded tribal lands for which there was no 1990 Census data because without this data we could not identify change in telephone subscribership rates from 1990 to 2000. We then grouped the remaining tribal lands into eight population categories, ranging in size from over 30,000 to under 500. Having postulated that the major barriers to increased telephone subscribership might be associated with poverty, geographic isolation, and lack of technical skills, we used the 1990 and 2000 Census data to determine for each of these tribal lands the percent of the population at or below the poverty level, the mileage of tribal lands from the closest population center of over 15,000, the percent of those over 25 without a high school diploma, and the change in telephone subscribership rate from 1990 to 2000. We selected tribal lands from each of the eight population groups with a range of scores on the criteria described above. Within the group of tribal lands that met the above criteria, we also strove to select tribal lands, where possible, from different geographic regions of the country. Using this methodology, we selected 21 tribal lands for interviews. We used data from the 1990 and 2000 decennial censuses’ American Indian and Alaska Native summary file. In addition to the 21 tribal lands selected, we also selected five tribal lands that had made efforts to improve telecommunications. We learned about these tribes from our analysis of documents from FCC, a national tribal organization, scholars and nonprofit organizations, as well as from our interviews with tribes, tribal organizations, service providers, and equipment manufacturers.
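The screening used to choose the initial 21 tribal lands amounts to a simple filter-and-group procedure over Census indicators. The following minimal sketch, written in Python, is illustrative only and is not the analysis performed for this report; the field names, the two sample records, and all population cutoffs other than the endpoints cited above (populations over 100, lands larger than one square mile, and groups ranging from over 30,000 to under 500) are hypothetical assumptions added only to make the criteria concrete.

# Illustrative sketch only; not the analysis performed for this report.
# All field names, sample values, and intermediate bin boundaries are hypothetical.
sample_tribal_lands = [
    {"name": "Example Land A", "pop_2000": 1200, "area_sq_mi": 450,
     "has_1990_data": True, "pct_poverty": 30.0, "miles_to_center": 40,
     "pct_no_diploma": 25.0, "phone_rate_1990": 60.0, "phone_rate_2000": 68.0},
    {"name": "Example Land B", "pop_2000": 90, "area_sq_mi": 0.5,
     "has_1990_data": True, "pct_poverty": 40.0, "miles_to_center": 80,
     "pct_no_diploma": 35.0, "phone_rate_1990": 50.0, "phone_rate_2000": 55.0},
]

# Eight population categories, from over 30,000 down to under 500 (interior
# boundaries are hypothetical).
POPULATION_BINS = [(30000, None), (15000, 30000), (10000, 15000), (5000, 10000),
                   (2500, 5000), (1000, 2500), (500, 1000), (100, 500)]

def eligible(land):
    # Screens described in the methodology: population over 100, land area
    # larger than one square mile, and 1990 Census data available.
    return land["pop_2000"] > 100 and land["area_sq_mi"] > 1 and land["has_1990_data"]

def population_group(land):
    # Return the index of the population category the land falls into.
    for i, (low, high) in enumerate(POPULATION_BINS):
        if land["pop_2000"] > low and (high is None or land["pop_2000"] <= high):
            return i
    return None

def indicators(land):
    # Indicators used to pick lands with a range of scores within each group.
    return {"pct_poverty": land["pct_poverty"],
            "miles_to_center": land["miles_to_center"],
            "pct_no_diploma": land["pct_no_diploma"],
            "subscribership_change": land["phone_rate_2000"] - land["phone_rate_1990"]}

for land in sample_tribal_lands:
    if eligible(land):
        print(land["name"], "group", population_group(land), indicators(land))

The actual selection also sought geographic dispersion across the country and a range of scores on each indicator within each population group, which this sketch does not attempt to capture.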
Tribes’ efforts included establishing tribally owned telecommunications companies, introducing new technologies to provide Internet access, developing programs to provide technical training for tribal members, and establishing a tribal regulatory agency to improve telecommunications, including the rights-of-way processes on tribal land. The telephone interviews conducted with officials from these 26 tribal lands and 12 Alaska regional native nonprofit organizations covered topics such as which companies provide wireline and wireless telephone service and Internet access on tribal lands; what factors contributed to any change in telephone subscribership rates from 1990 to 2000 (as derived from Census data); any barriers tribes faced in improving telecommunications services on tribal lands; how those barriers had been addressed; tribes’ experience with applying for various federal programs and with providers seeking Eligible Telecommunications Carrier status or applying for Tribal Lands Bidding Credits. Based on our analysis of the compiled research and interviews, we determined that tribes faced barriers in one or more of the following four categories: financial, geographic, technical, or rights-of-way. From our interviews, we identified 11 tribes as potential candidates for site visits because they were confronting one or more of these four barriers, had made progress in improving telecommunications services on their lands, and as a group, represented a range of population and tribal land sizes, as well as geographic locations. We then selected 6 of these tribes for site visits, assuring that, as a group, they represented all of the identified barriers and were located in different geographic regions of the lower 48 states. In addition to interviewing tribal officials at the six sites we visited, we also interviewed officials of some of the companies that provided telecommunications service to those sites regarding their views about the barriers to improving telecommunications services on tribal lands. We conducted our audit work from August 2004 through December 2005 in Washington, D.C., and at the Coeur D'Alene Tribe of the Coeur D'Alene Reservation, Idaho; Confederated Tribes and Bands of the Yakama Nation, Washington; Eastern Band of Cherokee Indians of North Carolina; Oglala Sioux Tribe of the Pine Ridge Reservation, South Dakota; Mescalero Apache Tribe of the Mescalero Reservation, New Mexico; and Navajo Nation in Arizona, New Mexico, and Utah. Our work was conducted in accordance with generally accepted government auditing standards.

Northern Cheyenne Tribe of the Northern Cheyenne Indian Reservation
Oglala Sioux Tribe of the Pine Ridge Reservation
Paiute-Shoshone Indians of the Bishop Community of the Bishop Colony
Rincon Band of Luiseno Mission Indians of the Rincon Reservation
San Carlos Apache Tribe of the San Carlos Reservation
Three Affiliated Tribes of the Fort Berthold Reservation
White Mountain Apache Tribe of the Fort Apache Reservation

B. Alaska Regional Native Non-Profit Organizations

Alaska Communications Systems Group Inc.
American Indian Higher Education Consortium
Cheyenne River Sioux Tribe Telephone Authority
Mescalero Apache Telecommunications, Inc.
National Indian Telecommunications Institute
Organization for the Promotion and Advancement of Small Telecommunications Companies
San Carlos Apache Telecommunications Utility, Inc.
We visited six tribes—the Coeur d’Alene of Idaho, the Yakama of Washington, the Eastern Band of Cherokee of North Carolina, the Mescalero Apache of New Mexico, the Oglala Sioux of South Dakota, and the Navajo of Arizona, New Mexico, and Utah—to determine how they approached their particular barriers to improving their telecommunications services. These tribes vary in size, geography, and other characteristics. In addition, we discussed approaches to overcoming barriers with officials of other tribes, service providers, and other entities, and found that tribes use numerous approaches to overcome the barriers they face. The approaches taken by a tribe often address more than one barrier. The Coeur d’Alene, whose tribal lands cover 523 square miles in northern Idaho, used an overall strategy of developing the tribe’s own system to provide high-speed Internet access for tribal members. Within this telecommunications strategy, the tribe’s particular approaches included applying for and obtaining an RUS grant, negotiating for rights-of-way, and developing technical expertise. The Coeur d’Alene’s tribal lands are located about 27 miles from Coeur d’Alene, Idaho, the nearest population center of 15,000 or more inhabitants. According to the 2000 Census, 1,303 Native Americans were living on the Coeur d’Alene lands. The estimated per capita income for Native Americans on Coeur d’Alene lands was $10,267, or less than half the national estimate of $21,587, while the poverty level was 28 percent, 15.6 percentage points above the national estimate of 12.4 percent. The unemployment level was 18 percent, or 12.2 percentage points above the national unemployment level of 5.8 percent. According to tribal officials, the tribe’s major barriers to improved telecommunications services included the following:

Financial: Many tribal residents are poor, and a tribal official said many cannot afford high-speed Internet service. This official told us that the Coeur d’Alene face an underemployment problem, as many people are employed but are paid low wages and have little money to spend on communications services. This official also told us that the tribe itself does not have the funds to pay for telecommunications equipment and services for its residents.

Geographic: Service providers have not expanded the telecommunications infrastructure across the tribe’s lands or upgraded the infrastructure to provide high-speed Internet access, partly because the large land area consisting of hilly and mountainous terrain makes expansion of the infrastructure expensive. According to a Coeur d’Alene tribal official, service providers determined that the cost of infrastructure expansion or improvement was too great to offer service to a limited number of tribal land residents, many of whom could not afford high-speed Internet access.

Lack of tribal technical capacity: A tribal official told us that the tribe does not have a sufficient number of technically knowledgeable staff members to develop and maintain needed telecommunications systems.

Rights-of-way: This became an issue for the tribe after it decided to put up its own wireless system. Tribal officials told us that they could not afford to pay the prices asked by some landowners and residents within reservation boundaries for rights-of-way to locate equipment on their land.

To obtain better telecommunications services, the tribe decided to develop its own telecommunications system that would offer high-speed Internet access to all residents.
One of the tribal members who had received technical training and was knowledgeable about high-speed Internet access determined that such access was possible at affordable rates and that the tribe’s large and rugged land area made a wireless system the least expensive choice. According to a tribal official, high-speed Internet access will improve access to business and educational opportunities and telemedicine services, and will better enable the tribe to preserve its language and history. Since the tribe did not have sufficient funds to develop a telecommunications system on its own, the technically trained tribal member applied for an RUS Community Connect grant. This type of grant can be used for expenditures for a wide array of infrastructure and related needs, such as household and business connection equipment as well as the construction of a community technology center. In May 2003, the tribe was awarded a $2.8 million grant that will be used to pay for five towers, fiber optic cable, equipment to send and receive wireless signals for all tribal households and businesses, technical staff to deploy and operate the system for 3 years, operational costs, and the community technology center. As of July 2005, the system was complete and operating. The technically trained tribal member is now managing the system. Once the tribe received the grant, it had to overcome the barriers of 1) obtaining rights-of-way in order to locate equipment and 2) developing a technically knowledgeable staff to eventually operate the planned system. Rather than paying for rights-of-way across private land, the tribe acquired the rights-of-way it needed for access roads and equipment in exchange for connections to the system. To address the current lack of technical knowledge among tribal residents, the tribe is working with two local colleges to increase its technical knowledge. The tribe is offering the colleges access to its new broadband system in exchange for distance learning classes and technical training. The tribe has also made plans to receive technical training from the Mescalero Apache Tribe, which owns its own system and provides training in telecommunications. In addition, to increase interest among tribal members in Internet access and computer usage, the tribal government plans to provide tribal members with training and Internet access at the tribe’s community technology center for as long as its budget will allow. Those attending training will be assisted by the recently hired technical staff in repairing and refurbishing computers that belong to the tribe and are no longer needed. They will be allowed to keep the computers for home use once the work is complete. Services are being offered free of charge for 2 years to the Benewah Medical Center, local libraries, fire and police departments on tribal land, as well as tribal and local public schools. The system will also make telemedicine services available so that those who are uninsured or underinsured can obtain the expertise of physicians not located on tribal lands. In addition, tribal members and non-tribal members will have high-speed Internet access at the community center at no cost. However, there will be a fee for high-speed Internet access to homes for tribal and non-tribal members living within reservation boundaries.
Tribal officials told us that, after the first 2 years of operation, they expect to earn sufficient revenue from subscribers within tribal boundaries to fund needed maintenance and improvements, as well as offset the costs of operating the Community Technology Center. Additionally, tribal officials told us that they are planning to purchase a local cable company to acquire the company’s lines and the rights-of-way that the company has negotiated across land within reservation boundaries. The tribe is hoping to use revenue from the broadband Internet system to provide broadband through cable services to current and future customers. Tribal officials expect the broadband services to attract businesses and are planning to provide technical support to new businesses on tribal lands, such as writing software. The Yakama Nation, whose lands encompass 2,153 square miles in south-central Washington, is developing its own telecommunications system that will offer wireless telephone and high-speed Internet access to all tribal land residents. The tribe has developed a long-range plan to finance development through savings accumulated over several years, mainly by reducing the amount of services purchased from the incumbent telecommunications provider and negotiating rights-of-way for telecommunications infrastructure. The Yakama Nation’s tribal lands are located about 24 miles from Yakima, Washington, the nearest population center of 15,000 or more inhabitants. According to the 2000 Census, 31,646 residents were living on Yakama tribal lands, 7,756 of whom were Native Americans. Estimated per capita income for Native Americans on Yakama lands was $8,816, or less than half the national estimate of $21,587, while the poverty level was 31 percent, 18.6 percentage points above the national estimate of 12.4 percent. Unemployment levels were 23 percent, or 17.2 percentage points above the national unemployment level of 5.8 percent. According to the tribal official with whom we spoke, the tribe’s major barriers to improved telecommunications services included the following:

Financial: According to the tribal official, in the past few years, the tribe’s main industry, timber, has not done well, and unemployment rates and poverty have been above the national average. Many residents cannot afford telephone service, and some of those who are not connected cannot afford the installation cost to become connected to the current infrastructure. The tribal official told us that many tribal members cannot afford a computer or Internet access, and the Internet access that is available is mostly low-speed dial-up service. The tribal official also said that in the past few years, the local service provider had raised its recurring monthly charges, resulting in an annual bill to the tribe of $325,000, an increase of $50,000 in annual costs, which was difficult for the tribal government to afford.

Geographic: While many tribal residents in the more heavily populated areas have access to telephone service, the tribal official told us that the tribe’s service provider has not built additional infrastructure to reach less populated areas and has no plans to do so in the near future. In addition, the tribal official told us that the service provider had established calling zones that make calls from one part of the reservation to another long distance. This has increased the cost of telephone service for both residents and the tribal government.
Lack of Tribal Technical Capacity: The tribal official stated that the tribe does not have a sufficient number of technically knowledgeable tribal members to develop and maintain needed telecommunications systems. The Yakama Nation is addressing these barriers by developing its own telecommunications system that will provide wireless telephone service and high-speed Internet access to the tribal government and the community at large. The tribal official told us that seven years ago, the tribe determined that it could improve telecommunications services by forming its own company, offering telecommunications services to tribal residents and tribal businesses as well as other homes and businesses, both on and off tribal lands. This official also said the tribe has developed a business plan to receive its license from the state of Washington to operate as a competitive local exchange carrier, allowing it to sell its services. The tribal official told us the system will improve education by providing high-speed Internet access to tribal schools and offer residents greater access to jobs and business opportunities. The tribal official also told us that although the system is not yet complete, the Yakama Tribal Government buildings are now connected to each other through a Local Area Network (LAN) and have high-speed Internet access. This level of service has reduced the fees the tribe pays to the local service provider, allowing the tribe to increase the funding available for developing its own telephone telecommunications system. To overcome the funding barrier, the tribe put together a long-range plan that required the tribe to reduce its use of the current provider’s services and then use the savings to develop its own system. Since 1998, the tribe has used annual savings from reduced telephone services and funds from other services to establish a telecommunications company and then purchase needed equipment. The technically trained tribal member who headed the planning and development of this system told us that because of the downturn in the telecommunications sector in the past few years and the long-range plans the tribe had made, the tribe was able to purchase surplus fiber at 25 percent of its retail price. In addition, the tribe was also able to negotiate with a local contractor for installation of the fiber at a price far below market rates. The tribal official told us that long-range financial planning and careful budgeting have been important to the tribe’s success and that infrastructure has been purchased or installed each year based on what the tribe could afford. The tribe is addressing its lack of technical capacity in a number of ways. The tribe has proposed to connect a local university to its telecommunications system in exchange for technical training. In addition, the tribe plans to train residents in computer and Internet use at an existing tribal technology center. The tribal official emphasized that determining how the tribe could afford the cost of trained staff to manage and maintain the system once it is operational was a very important part of their planning. The tribe determined that the system could produce revenue to pay for technically trained staff and necessary maintenance by offering wireless telephone and high-speed Internet access to areas adjacent to tribal lands. 
The tribe plans to erect additional towers; offer homes and businesses the opportunity to purchase equipment to connect to the system; and connect the tribally owned system to the public switched network. The tribal official told us that several locations are available to connect to the public switched network and that the tribe will select the location that offers it the best price. The tribal official estimates that the system will be complete in 1 to 2 years. The Eastern Band of Cherokee, whose tribal lands cover about 82 square miles in the Smoky Mountains of western North Carolina, has improved telecommunications infrastructure and services, particularly high-capacity transmission and Internet-based services, by deploying two fiber networks: a tribally owned fiber-optic ring within the reservation area and a jointly owned fiber-optic network in three states. To build these networks, the Eastern Band of Cherokee partnered with a local business, provided part of the funding, and is applying for a USDA RUS loan jointly with its partner company. The Eastern Band of Cherokee’s tribal lands are located about 33 miles from Asheville, North Carolina, the nearest population center of 50,000 or more inhabitants. According to the 2000 Census, there were 6,132 Native Americans living on Eastern Band of Cherokee’s tribal land. The estimated per capita income for Native Americans on Eastern Band of Cherokee lands was $12,248, somewhat more than half the national estimate of $21,587, while the poverty level was 24 percent, 11.6 percentage points above the national estimate of 12.4 percent. The unemployment level was 9 percent, or 3.2 percentage points above the national unemployment level of 5.8 percent. Tribal officials told us that the major barrier to improved telecommunications services the Eastern Band of Cherokee faced was the following:

Geographic: Tribal lands are geographically isolated by the Smoky Mountains, and there is low population density in the area. According to a tribal telecommunications company official, prices for fiber-optic transmission networks and high-speed Internet access points are many times higher than in major metropolitan areas, where such connections are plentiful and competitively priced. A major contributor to the high cost of service is the transmission of data. This official said that voice, data, and Internet traffic from this rural mountain community must be hauled long distances for aggregation and connection to the national backbones of telecommunications and Internet service providers. The carriage provided by the local telephone company is priced at rates that are distance sensitive, making them some of the highest in the state. However, according to a tribal official, despite the local provider’s prices, the provider’s current telecommunications infrastructure on Eastern Band of Cherokee’s tribal lands is out of date and malfunctions frequently, causing interruptions in service.

To improve access to fiber-optic infrastructure and to lower the cost of transmission for Internet service providers, as well as for schools, hospitals, rural clinics, government agencies, and residents on tribal lands, the tribe constructed two fiber-optic networks. The first is a network that provides access within the reservation; the second is an interconnecting network through parts of three states and is referred to as a middle-mile network.
According to one of the tribal telecommunications company officials we interviewed, the middle-mile network is a very high-capacity network that can move large amounts of information at high speeds with plenty of capacity for future growth. This official told us that to deploy this middle-mile network, the tribe partnered with a private firm, one of the largest electronic tax filers in the United States and one of the largest employers in the region after the tribe. Together, they formed a joint venture company to construct, own, and operate the network. The company official also told us that the joint venture company leases dark fiber and also operates as a certificated competitive local exchange carrier and interexchange carrier in three states. The networks support very high capacities for real-time, interactive applications such as three-dimensional modeling and simulation. The company also offers open access to its dark fiber on short-term and long-term leases (up to 20 years) to any requesting entity and sells its fiber and services at rates pegged to the wholesale rates being charged in large metropolitan areas. The company official stated that system deployment began in September 2003, with completion expected by the end of 2005, and that the network will consist of about 257 miles of underground fiber optic cable. A tribal official told us the tribe wanted to help attract new businesses to the area as well as help existing companies modernize and expand. Of equal importance to the tribe are improvements and enhancements in government services, health care and education, and residential Internet access. A telecommunications company official told us the joint venture has already begun providing wide-area data and Internet transmission services for a four-site hospital system in the area, greatly reducing the hospital system’s costs and providing transmission times of only 6 seconds for x-ray images sent between sites. Officials of the tribe and the company told us that the tribe will use its ownership in these networks and future planned deployment of cable and wireless infrastructure to ensure that all residents of tribal lands can receive high-speed Internet, VoIP (Voice over Internet Protocol), and other information and content applications at costs and quality levels comparable to or better than those in metropolitan areas. The tribe is currently planning facilities and programs for computer training laboratories for tribal members to learn about computers, networks, and the Internet, and is also planning for workforce retraining programs. The Mescalero Apache reservation covers 719 square miles and is located in southeastern New Mexico. The Mescalero Apache addressed their telecommunications issues by purchasing the local telephone company with the help of RUS loans and developing initiatives to improve the tribe’s technical capacity to provide telephone service and high-speed Internet access. According to the 2000 Census, there were 2,932 Native American residents living on Mescalero Apache land. The estimated per capita income for Native American residents was $7,417, slightly more than one-third the national estimate of $21,587, while the level of poverty was 37 percent, 24.6 percentage points above the national estimate of 12.4 percent. The unemployment level was 17 percent, 11.2 percentage points above the national unemployment level of 5.8 percent.
According to tribal officials, before the Mescalero Apache purchased the local telecommunications company, the tribe’s major barriers to improving telecommunications service included the following:

Geographic: The size of the reservation makes the deployment of wireline infrastructure expensive, and the small number of tribal residents limits the ability of the service providers to recoup their investment. Tribal officials told us that the former local service provider was unwilling to upgrade the telecommunications network on the Mescalero Apache reservation to provide high-quality voice or data services.

Lack of Tribal Technical Capacity: In 1995, the Tribal Council passed a resolution stating the tribe’s intention to purchase the former telephone service provider’s network. However, the tribe did not have a sufficient base of technically knowledgeable tribal members to operate the former provider’s telephone network.

To overcome these barriers, the tribal government purchased the former wireline service provider’s network on the reservation. The tribal government then formed a company, Mescalero Apache Telecommunications, Inc. (MATI), to develop this network to provide higher-quality telecommunications services than previously available. MATI then rebuilt the network by installing more than 1,000 route miles of fiber optic cable to provide high-speed Internet access as well as local and long distance telephone service. According to a MATI official, telephone and high-speed Internet access are now nearly universally available within the reservation, and Gigabit Ethernet, which is nearly 1,000 times faster than DSL, has been deployed to the Mescalero casino. In addition, this MATI official told us that the residential telephone subscribership rate on the Mescalero Apache tribal lands has increased from 10 percent to 97 percent since these improvements were made to the network. To address the geographic issue, the MATI official said that the tribal government instructed MATI to focus on providing services to the reservation rather than maximizing profit, which could limit investment in services. Additionally, MATI utilizes various approaches to improve its technical capacity to offer higher-quality services. Specifically, it developed strategic relationships and training to improve the staff’s technical capabilities to operate telecommunications technologies. For example, the MATI official told us that when MATI was starting to provide service, MATI was able to borrow a switch from a manufacturer. Currently, MATI has an agreement with a VoIP equipment manufacturer to deliver voice calls over the Internet. This agreement has allowed MATI to begin to deploy this technology to customers outside the reservation over a shared spectrum wireless network. The MATI official said that this relationship has also allowed MATI to train its personnel on the use of this equipment. The MATI official also told us that MATI created a technical mentoring program to build tribal telecommunications capacity. Although about half of MATI’s staff consists of non-tribal members, tribal officials expect to hire more tribal members as they receive technical training and non-tribal members retire. Newer tribal staff are paired with experienced non-tribal staff for the purpose of learning telecommunications technologies. The MATI official said that the goal is to create a self-sufficient tribal knowledge base to understand and operate the telecommunications network.
This official said that MATI’s development of its technical capabilities has also allowed it to offer technical consulting services to other tribes that are interested in providing their own telecommunications network. For example, Coeur d’Alene tribal officials told us that they plan to use MATI staff to train some of their telecommunications staff and increase the tribe’s technical capacity to operate a telecommunications network. The MATI official also told us that MATI hosts an annual telecommunications conference for tribes and municipal governments to inform them about the basics of telecommunications finance and technology. Oglala Sioux lands cover approximately 3,150 square miles and are located in southwestern South Dakota. To improve telecommunications services on their tribal lands, the Oglala Sioux partnered with Western Wireless Corporation (now merged with Alltel), a wireless service provider, to offer wireless phone service on their lands in competition with the wireline provider. According to tribal and Western Wireless officials, access to the Universal Service High Cost Fund and Enhanced Link-Up and Lifeline programs allows Western Wireless to recover some infrastructure deployment costs and offer discounted telephone service to residents of the Oglala Sioux’s Pine Ridge Indian Reservation. The Oglala Sioux tribal lands are located in southwestern South Dakota, about 80 miles south of Rapid City, South Dakota, the nearest population center of 50,000 or more inhabitants. According to the 2000 Census, 14,334 Native Americans were living on these tribal lands. The estimated per capita income for Native Americans was $5,624, slightly more than one-quarter the national estimate of $21,587, while the poverty level was 55 percent, more than 40 percentage points above the national estimate of 12.4 percent. The unemployment level was 37 percent, or 32.2 percentage points above the national unemployment level of 5.8 percent. According to tribal and industry officials, the tribe’s major barriers to improved telecommunications services included the following:

Financial: According to a tribal official, tribal members have limited financial resources to purchase telecommunications services. Census data indicate that the Pine Ridge Indian Reservation is one of the most economically distressed tribal lands in the United States. Over one-half of the population falls below the federal poverty line, while unemployment is more than six times the national estimate.

Geographic: The Pine Ridge Indian Reservation is geographically isolated and has a low population density, which, according to the tribal official, has limited the number of companies interested in providing telecommunications services. According to the 2000 Census, approximately 14,000 Oglala Sioux were living on the 3,150-square-mile reservation, an area about one and a half times the size of Delaware. The tribal official also told us that the geographic isolation of the Pine Ridge Indian Reservation meant that it was difficult for tribal members to reach public safety services when traveling through remote areas of the reservation.

To overcome these barriers, the Oglala Sioux partnered with a wireless service provider to offer wireless phone service to residents of the Pine Ridge Indian Reservation. The Oglala Sioux Tribe and the wireless provider signed a service agreement to formalize this partnership.
The agreement defined the provider’s responsibilities to provide wireless phone service and the tribe’s responsibilities and rights to advertise the service and receive leasing fees for the wireless towers on its land. According to a tribal official and provider officials, the key to deploying wireless service on the Pine Ridge reservation was the provider’s ability to access federal universal service funds to subsidize its network costs (High Cost Fund) and offer discounted telephone service (Enhanced Link-Up and Lifeline). In order to access these funds, the provider, with consent from the Oglala Sioux Tribe, applied for and received an eligible telecommunications carrier (ETC) designation from FCC in 2001. This enabled the provider to access High Cost funds as well as the Enhanced Link-Up and Lifeline programs, which lower the costs of telephone service for low-income customers. The provider deployed several towers in diverse areas of the reservation to provide widespread coverage. The tribe also worked with the provider to create an expanded local calling area for its customers that included all areas of the reservation as well as Rapid City, South Dakota. According to a tribal official, the addition of Rapid City as part of the local calling area was an important cost-saving measure for the tribe because a significant number of Oglala Sioux live in the Rapid City area. A tribal official told us that wireless telephone service has improved public safety and the general quality of telecommunications service on the Pine Ridge reservation. According to tribal and provider officials, tribal members can reach public safety services, such as 911, from nearly any location on the reservation. According to a tribal official, this is particularly important due to the summer and winter temperature extremes on the reservation. The wireline service provider officials also noted that the wireless provider’s presence as a competitor has helped to sharpen their focus on providing high-quality services. A tribal official told us that the wireless provider initially anticipated having about 300 customers on Oglala Sioux tribal lands, but had about 4,000 customers within 1 year of offering service. The Navajo Nation is the largest federally recognized tribe and tribal land in the United States. According to the 2000 Census, the Navajo Nation covers over 24,000 square miles, an area roughly the size of West Virginia, and extends into the states of Arizona, New Mexico, and Utah. To improve telecommunications on their lands, the Navajo are streamlining the tribal rights-of-way process to aid service providers; encouraging competition in order to improve prices and service quality; and emphasizing wireless technologies better suited to the geography of the tribal land. The Navajo Nation’s tribal lands are not located near any major metropolitan area. According to the 2000 Census, 176,256 Native Americans were living on Navajo tribal lands. The estimated per capita income for Native Americans on Navajo lands was $6,801, less than one-third the national estimate of $21,587, while the poverty level was 44 percent, 31.6 percentage points above the national estimate of 12.4 percent. The unemployment level was 26 percent, or 21.2 percentage points above the national unemployment level of 5.8 percent. Several telecommunications providers, both wireline and wireless, serve the Navajo Nation; however, not all areas within the reservation have access to voice or data service.
Two providers offer high-speed Internet connectivity on parts of the reservation. One of them offers DSL to households at various places on the reservation. However, an official from this company noted that DSL works best if deployed within 15,000 feet of the central office, while many residents live beyond the 15,000-foot limit. The other provider offers high-speed Internet connections through satellite at 110 Navajo Nation chapter houses. However, one tribal official told us that the tribal chapter house connections are not financially sustainable in the long term. All three states (Arizona, New Mexico, and Utah) granted a library designation to the 110 chapter houses, and all chapter houses were approved by USAC for library E-rate funds. This official also stated that the tribe uses E-rate funds to pay for about 85 percent of the annual $3 million cost for satellite connectivity. However, the official told us that the tribe must pay the remaining 15 percent of the cost, or about $450,000, and that Navajo officials consider this amount to be a high cost. According to tribal officials, the tribe’s major barriers to improving telecommunications services include the following:

Geographic: Geographic isolation has increased the cost of providing service on Navajo lands and limited the number of companies interested in providing telecommunications services. The distances needed to connect communities and homes with copper wires or fiber optic cable make wireline telecommunications systems expensive. For example, the tribe estimated in 1999 that it cost about $5,000 to connect a new wireline subscriber. The installation of wireless infrastructure is also expensive due to the vast network of towers and power access needed to relay signals around the rugged landscape. Service providers have told us the cost of deploying telecommunications infrastructure on Navajo lands impedes the provision of services.

Rights-of-way: According to tribal officials, several factors combine to make obtaining rights-of-way across Navajo trust lands difficult, and serve as deterrents to extending and improving the tribe’s telecommunications infrastructure. Both service provider and tribal officials told us that the tribal government’s process for approving rights-of-way across trust lands is time-consuming and expensive. In addition, tribal officials told us that obtaining approval of rights-of-way from BIA across Indian allotments within tribal boundaries can also be very time-consuming and expensive because ownership of these lands has been divided among a large number of heirs, and at least 51 percent of the heirs must approve any change in the status of the land. In some cases, the location of many of these heirs is unknown.

To address these barriers and improve telecommunications services on the reservation, tribal leaders formed the Navajo Nation Telecommunications Regulatory Commission (NNTRC). The Navajo Nation requires service providers to supply the NNTRC with information about their intended service areas, service offerings, and network buildout plans. This information allows the NNTRC to review providers’ plans for providing services and then hold them accountable for fulfilling those plans. The NNTRC encourages providers to attend hearings to comment on the barriers they encounter to providing telecommunications services. As a result, the NNTRC works with the service providers to reduce or remove barriers.
The NNTRC is addressing geographic barriers by encouraging providers to deploy wireless telecommunications systems that are more appropriate for the Nation's large geographic area. NNTRC is also addressing the cost of services on the Navajo Nation by encouraging multiple providers to offer services, thus creating competition. NNTRC officials told us that competition is the best method to lower prices and improve services. Currently, NNTRC works with wireless companies to encourage them to extend their service throughout the Navajo Nation. Officials from wireless companies serving and seeking to serve the Navajo Nation told us that access to universal service program funds combined with their use of less costly wireless technologies provides a viable business case for their entry onto Navajo lands. Tribal officials told us that the NNTRC drafted a rights-of-way policy that includes new procedures to make the tribe's process for approving rights-of-way more efficient and timely for service providers. According to a Navajo official, this policy is currently being reviewed for approval by several of their tribal government departments. Following this approval process, NNTRC intends to implement this policy. In addition to the contact named above, Carol Anderson-Guthrie and John Finedore, Assistant Directors; Edda Emmanuelli-Perez, Michele Fejfar, Logan Kleier, Michael Mgebroff, John Mingus, Mindi Weisenbloom, Alwynne Wilbur, Carrie Wilks, and Nancy Zearfoss made key contributions to this report.
An important goal of the Communications Act of 1934, as amended, is to ensure access to telecommunications services for all Americans. The Federal Communications Commission has made efforts to improve the historically low subscribership rates of Native Americans on tribal lands. In addition, Congress is considering legislation to establish a grant program to help tribes improve telecommunications services on their lands. This report discusses 1) the status of telecommunications subscribership for Native Americans living on tribal lands; 2) federal programs available for improving telecommunications on these lands; 3) barriers to improvements; and 4) how some tribes are addressing these barriers. Based on the 2000 decennial census, the telephone subscribership rate for Native American households on tribal lands was substantially below the national level of about 98 percent. Specifically, about 69 percent of Native American households on tribal lands in the lower 48 states and about 87 percent in Alaska Native villages had telephone service. While this data indicates some progress since 1990, changes since 2000 are not known. The U.S. Census Bureau is implementing a new survey that will provide annual telephone subscribership rates, though the results for all tribal lands will not be available until 2010. The status of Internet subscribership on tribal lands is unknown because no one collects this data at the tribal level. Without current subscribership data, it is difficult to assess progress or the impact of federal programs to improve telecommunications on tribal lands. The Rural Utilities Service and the FCC have several general programs to improve telecommunications in rural areas and make service affordable for low-income groups, which would include tribal lands. In addition, FCC created some programs targeted to tribal lands, including programs to provide discounts on the cost of telephone service to residents of tribal lands and financial incentives to encourage wireless providers to serve tribal lands. However, one of FCC's universal service fund programs that supports telecommunications services at libraries has legislatively based eligibility rules that preclude tribal libraries in at least two states from being eligible for this funding. FCC officials told GAO that it is unable to modify these eligibility rules because they are contained in statute and thus modifications would require legislative action by Congress. The barriers to improving telecommunications on tribal lands most often cited by tribal officials, service providers, and others GAO spoke with were the rural, rugged terrain of tribal lands and tribes' limited financial resources. These barriers increase the costs of deploying infrastructure and limit the ability of service providers to recover their costs, which can reduce providers' interest in investing in providing or improving service. Other barriers include the shortage of technically trained tribal members and providers' difficulty in obtaining rights of way to deploy their infrastructure on tribal lands. GAO found that to address the barriers of rural, rugged terrain and limited financial resources that can reduce providers' interest in investing on tribal lands, several tribes are moving toward owning or developing their own telecommunications systems, using federal grants, loans, or other assistance, and private-sector partnerships. Some are also focusing on wireless technologies, which can be less expensive to deploy over rural, rugged terrain. 
Two tribes are bringing in wireless carriers to compete with the wireline carrier on price and service. In addition, some tribes have developed ways to address the need for technical training, and one has worked to expedite the tribal decision-making process regarding rights-of-way approvals.
User fees or user charges are defined by OMB as assessments levied on a class of individuals or businesses directly benefiting from, or subject to regulation by, a government program or activity. Examples of user fees are trademark registration fees, park entrance fees, and food inspection fees. User fees represent the principle that identifiable individuals or businesses who receive benefits from governmental services beyond those that accrue to the general public should bear the cost of providing the service. General user fee authority was established under title V of the Independent Offices Appropriation Act (IOAA) of 1952. The IOAA gave agencies broad authority to assess user fees or charges on identifiable beneficiaries by administrative regulation. The IOAA does not, however, authorize agencies to retain and/or use the fees they collect. In the absence of specific legislation that authorizes agencies to retain and/or use the fees they collect, fees must be deposited in the U.S. Treasury general fund. Authority to assess user fees may also be granted to agencies through the enactment of specific authorizing or appropriations legislation, which may or may not authorize the agencies to retain and/or use the fees they collect. OMB Circular A-25, dated July 8, 1993, establishes guidelines for federal agencies to use in assessing fees for government services and for the sale or use of government property or resources. The Circular (1) states that its provisions shall be applied by agencies in their assessment of user charges under the IOAA and (2) provides guidance to agencies regarding their assessment of user charges authorized under other statutes. A specific user fee rate or amount may be based on the full cost to the government of the service or goods provided or on market value, or may be set legislatively. The Circular outlines the circumstances under which agencies are to use cost recovery or market value for determining the fee amount. It defines full cost as all direct and indirect costs to any part of the federal government of providing goods or services, including, but not limited to, direct and indirect personnel costs (i.e., salaries and fringe benefits); overhead costs (i.e., rents and utilities); and management and supervisory costs. The Circular defines market value as the price for goods, resources, or services that is based on competition in open markets and creates neither a shortage nor a surplus of the goods, resources, or services. In some cases, legislation either sets the specific user fee rate or amount or stipulates how the fee is to be calculated, such as a formula. These fees can be based on partial cost recovery, partial market value, or some other basis. For example, the Social Security Administration's (SSA) fees for administration of state supplementary payments are legislatively set at $6.20 per payment for fiscal year 1998. An example of partial cost recovery is provided by Public Law 98-575, which excludes the recovery of overhead costs from the National Aeronautics and Space Administration's commercial space launch services fees. Both the CFO Act and OMB Circular A-25 provide that agencies review their user fees biennially. The CFO Act of 1990 requires an agency's CFO to review on a biennial basis the fees, royalties, rents, and other charges for services and things of value and make recommendations on revising those charges to reflect costs incurred. 
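As a rough illustration of the Circular's full cost concept described above, the sketch below totals direct personnel, overhead, and management costs and divides by an expected transaction volume to arrive at a cost-recovery fee. The cost categories mirror the Circular's definition quoted above, but the agency, the dollar amounts, and the function are entirely hypothetical.

# Hypothetical full-cost-recovery fee calculation in the spirit of OMB Circular A-25.
# Cost categories follow the Circular's definition (direct and indirect personnel costs,
# overhead such as rents and utilities, and management and supervisory costs);
# all dollar figures below are invented for illustration.

def full_cost_fee(direct_personnel: float, overhead: float,
                  management: float, expected_transactions: int) -> float:
    """Return the per-transaction fee needed to recover full cost."""
    full_cost = direct_personnel + overhead + management
    return full_cost / expected_transactions

fee = full_cost_fee(direct_personnel=850_000,    # salaries and fringe benefits
                    overhead=150_000,            # rents, utilities
                    management=100_000,          # management and supervisory costs
                    expected_transactions=55_000)
print(f"Cost-recovery fee per transaction: ${fee:.2f}")  # $20.00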
OMB Circular A-25 provides that each agency will review user charges biennially to include (1) assurance that existing charges are adjusted to reflect unanticipated changes in costs or market values and (2) a review of other programs within the agency to determine whether fees should be initiated for government services or goods for which it is not currently charging fees. Circular A-25 further states that agencies should discuss the results of the user fee reviews and any resultant proposals in the CFO annual report required by the CFO Act. The Circular also states that when the imposition of user charges is prohibited or restricted by existing law, agencies will review activities and recommend legislative changes when appropriate. Periodic reviews of all user fees are important because the reviews can provide agencies, the administration, and Congress with information on the government’s costs to provide these services or, in some cases, the current market value of goods and services provided. To obtain the information for the first three objectives, we requested the CFOs of the 24 agencies to provide for fiscal year 1996 (1) a list of all user fees, (2) the basis (cost recovery, market value, or legislatively set) for determining the fee amount, (3) total amount of user fees collected in fiscal year 1996, and (4) supporting documents for the most recent review they had conducted of each user fee between fiscal years 1993 and 1997. We used 1996 fees because 1996 was the most recent year agencies had complete data. We reviewed the supporting documentation of the fee reviews to determine whether the reviews (1) indicated that direct and indirect costs were determined (if the fee was based on cost recovery) or current market value was determined (if the fee was based on market value) and (2) included an assessment of other programs within the agency to identify potential new user fees. We followed up with agency program officials when necessary to clarify the CFOs’ responses. We also reviewed Federal Register notices for fiscal years 1993 through 1997 that discussed fee revisions and how the fees were calculated. In addition, we reviewed prior reports by the agencies’ Inspectors General (IG) and us that covered user fees in CFO agencies during the time period covered by the scope of our work. We did not verify whether agencies reported all of their user fees. To obtain information on the fourth objective, we reviewed the CFO annual reports for fiscal years 1995 through 1997 and requested information from the 24 agencies on whether they reported the results of reviews in the CFO reports during fiscal years 1993 and 1994. To determine whether agencies were more likely to review fees if the fees were authorized to be used to cover agencies’ expenses compared to when they were not, we obtained information from each of the agencies on whether they had legislative authority to use fees they collect. We then compared the number of reviews of fees that agencies were allowed to keep with the number of reviews of those that they were not allowed to keep. We reviewed relevant laws and regulations pertaining to user fees, including the CFO Act of 1990, the IOAA and other user fee authorizing legislation, and OMB Circular A-25. We also reviewed OMB Bulletins 94-01 and 97-01, Form and Content of Agency Financial Statements, to determine whether they contained user fee reporting requirements. We met with OMB officials to obtain additional information on OMB’s user fee review and reporting requirements. 
In some cases, agencies said they did not formally conduct “biennial fee reviews” but instead periodically, generally annually, conducted fee rate updates that met the key requirements of a biennial review. In these instances, we considered the rate updates as user fee reviews. In those cases where agency documentation indicated that agencies determined the direct and indirect costs of providing services, we did not verify that both direct and indirect costs had been considered or that the types of costs considered were appropriate. Our previous work has concluded that, in general, the federal government does not have adequate cost accounting systems to track costs to specific programs or services. To audit each individual cost factor for the fees we reviewed was beyond our scope and would have involved more time and resources than were available. Our scope did not include fees charged to other federal agencies or federal employees. We also excluded insurance premiums because, according to an OMB official, they were not subject to Circular A-25 during the scope of our review. We excluded credit-related fees, such as loan guarantee fees, since OMB advised that credit-related fees were not covered by Circular A-25, but were governed by OMB Circular A-129, Policies for Federal Credit Programs and Non-Tax Receivables. We did our work at the 24 CFO agencies’ headquarters in Washington, D.C., between June 1997 and June 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of the Office of Management and Budget and asked the Chief Financial Officers of the 24 agencies included in the review to verify the accuracy of their agencies’ data used in the report. Their comments are discussed near the end of this letter. As table 1 shows, the 24 CFO agencies reported having 546 total user fees in effect in fiscal year 1996. Agencies reported that 397 of their fees were based on cost recovery, 35 were based on market value, and 114 were set by legislation. As previously stated, statute-based formulas can be based on either market value, cost recovery, or some other basis. Of the 24 CFO agencies with 546 reported user fees, 6 agencies reviewed all of their reported fees at least biennially as required by Circular A-25 during fiscal years 1993 through 1997, 3 reviewed all of their reported fees at least once, 11 reviewed some of their reported fees, and 4 did not review any of their reported fees during this period. The agencies reported that they had reviewed 259 of the fees annually, 159 biennially, and 34 once during this 5-year period, as shown in table 2. According to OMB Circular A-25, agencies should have reviewed the fees at least biennially. The fee reviews that were conducted annually or biennially were in compliance with the Circular. Excluding the three newly effective fees in table 2, 13 agencies did not comply with the Circular for 31 fees that were reviewed only once during the 5-year period. All of the 31 fees were in effect long enough to have had biennial reviews. Fifteen of the 24 CFO agencies had not reviewed 94 user fees at all during the 5-year period. These 94 fees were about 17 percent of the total 546 fees. The agencies provided various reasons for not conducting the reviews. For example, the Department of the Treasury’s U.S. Customs Service reported that it had not reviewed its nine fees (reported as totaling over $1 billion in fiscal year 1996) because of insufficient cost data. 
Customs said that it was in the process of developing the necessary data to evaluate the fees and make recommendations to Congress on any necessary changes. The U.S. Agency for International Development reported that it did not review its three fees because the amount of user fees collected was minimal (reported as $50,000 for fiscal year 1996). SSA said that it had not reviewed its eight fees because the majority of its fees were either legislatively set or were based on the actual computation of the full cost to provide the service. According to an agency official, SSA was currently conducting a review of two of its fees and stated that four additional fees would be reviewed in conjunction with the agency's comprehensive evaluation of its fee charging policy. Of the 94 fees not reviewed, 42 were set by legislation. The 42 fees represent about 37 percent of the 114 fees set by legislation and about 45 percent of the fees that agencies had not reviewed. Several agencies reported that they had not reviewed the fees set by legislation because they believed the fees were either not subject to the user fee review requirements or could not be changed unless legislation was amended. For example, the Department of Veterans Affairs and the Department of Health and Human Services' Food and Drug Administration reported that they had not reviewed fees that were set by legislation because they believed the fees were not subject to the CFO Act. The Department of Transportation's Federal Aviation Administration (FAA) and the Department of Health and Human Services' Health Care Financing Administration (HCFA) reported that they did not review the fees because they believed the fees could not be changed unless legislation was amended. However, OMB Circular A-25 provides that all fees, including those set by specific legislation, be reviewed. One rationale for reviewing all user fees, even those where a policy decision was made to not recover full costs, is that the extent to which fees do not recover the direct and indirect costs (i.e., the government subsidy) should be transparent so that program managers can properly inform the public, Congress, and federal executives about the extent of the subsidy. OMB Circular A-25 provides that the user fee review include assurance that existing charges reflect costs or current market value. Of the 397 cost-based fees, agencies reviewed 357. For 352 (or about 99 percent) of the cost-based fees reviewed, documentation indicated that both direct and indirect costs were considered. Agencies had reviewed 23 of the 35 fees based on market value. Documentation indicated that current market value was assessed for 14 of the 23 reviewed fees. Overall, the reviews determining whether fees reflected cost or current market value resulted in 159 fee increases that became effective during the period we reviewed. We did not verify whether the agencies had appropriate cost accounting systems in place to identify all direct and indirect costs or whether the costs included were complete and appropriate. However, problems with CFO agencies' cost systems were one of the reasons given by the CFO Council in June 1997 for requesting the Federal Accounting Standards Advisory Board to delay implementation of SFFAS No. 4. Prior work by agency IGs and us has also shown that agencies often lack cost accounting systems to track costs by specific program or service. In 1998, we reported in our audit of the U.S. 
Government’s 1997 Consolidated Financial Statement that the government was unable to support significant portions of the more than $1.6 trillion reported as the total net costs of government operations. We further stated that without accurate cost information, the federal government is limited in its ability to control and reduce costs, assess performance, evaluate programs, and set fees to recover costs where required. We also stated in the report that, as of the date of the report, only four agency auditors had reported that their agency’s financial systems complied with the Federal Financial Management Improvement Act (FFMIA) of 1996 requirements for financial management systems. In 1996 and 1997, we reported that while three Power Marketing Administrations (PMA), with reported revenues of $997 million in fiscal year 1996, were generally following applicable laws and regulations regarding recovery of power-related costs, they were not recovering all costs. Although PMAs are required to recover all costs, they had not done so, partly because they did not follow the full cost definition as set forth in OMB Circular A-25. In addition, IGs within 6 of the 24 CFO agencies reported on weaknesses in agencies’ procedures for determining the cost of goods or services for which there were user fees during the 5-year period covered by our scope. Also, in reference to market value assessments, we reported in 1996 and 1998 that the Department of Agriculture’s U.S. Forest Service did not always obtain the fair market value for user fees covering the use of federal land. OMB Circular A-25 provides that agencies’ user fee reviews should include a review of other agency programs to determine whether additional fees should be charged either under existing authority or by proposing new legislative authority. Of the 20 agencies that conducted user fee reviews, documentation indicated that seven agencies considered new fees, five agencies did not consider new fee opportunities because they did not provide a service for which a fee was not already charged, and eight agencies where the potential for new fees existed did not consider new fee opportunities. Agencies’ reasons for not looking for new fee opportunities varied. The Department of Veterans Affairs reported that it views its nonfee services as goodwill to the community, and the agency would have to obtain legislative authority to charge for the nonfee services. An FAA official said FAA had not attempted to identify new individual user fees pending the outcome of the ongoing consideration being given to the financial restructuring of FAA, which was included in legislation proposed to Congress on April 20, 1998. The Department of Commerce’s Bureau of the Census said that it is facing the task of achieving the best balance between maximizing the usefulness of data to the widest possible audience and charging for more of the information. HCFA reported that it had looked at potential user fees earlier and decided that the new fees would not be in the best interest of the government because either the cost of fee collection would have outweighed the expected revenues or the agency and the recipient benefited equally from the service. OMB Circular A-25 provides that agencies should discuss the results of the user fee reviews and any resultant proposals in the CFO annual reports required by the CFO Act. The act requires that the CFOs of the 24 agencies identified in the act submit an annual financial management report to the Director of OMB. 
To satisfy this CFO reporting requirement, agencies submit annual, audited financial statements. The CFO Act requires the Director of OMB to prescribe the form and content of the financial statements, consistent with applicable accounting principles, standards, and requirements. The CFO Act also requires that these agencies analyze the status of financial management and prepare and make their annual revisions to plans implementing the OMB governmentwide 5-year financial management plan. The OMB guidance is not clear as to how the user fee review results should be reported. Thirteen of the 24 CFO agencies had referenced the user fee reviews in either their annual financial statements or their annual revisions to the 5-year financial management plan between fiscal years 1993 and 1997 as follows: One agency reported review results in 4 of the 5 years. Five agencies reported review results in 2 of the 5 years. Seven agencies reported review results in 1 of the 5 years. Five of these seven agencies reported results for the first time in their fiscal year 1997 reports after we had asked about the reporting. Two of them said that they had not previously reported the reviews because the reporting guidance was not clear. The remaining three said (1) the total amount of fees was not material, (2) nonadherence was an oversight, and (3) prior reviews were informal and undocumented. The other 11 agencies reported that they had not reported the results of their biennial reviews, or lack thereof, in any of the CFO annual reports for fiscal years 1993 through 1997. As shown in table 3, eight agencies said they did not report the review results because either the total amount of fees was considered to be minimal and not material or the reporting requirements were confusing and not consistent with OMB guidance for the form and content of annual financial statements. Guidance for form and content states specifically what agencies should present in the annual financial statements and does not include the user fee reporting requirement. According to OMB officials, OMB has not provided any guidance on reporting the results of the user fee reviews other than Circular A-25. OMB agreed that Circular A-25 user fee reporting instructions need to be clarified and plans to address this during 1998, by updating Circular A-11, Preparation and Submission of Budget Estimates. An OMB official said Circular A-11 has a higher profile than Circular A-25 and was scheduled to be revised before Circular A-25. It did not appear that agencies placed significantly less emphasis on reviewing fees that went to Treasury’s general fund than on fees of which all or a portion were authorized to cover agency expenses. In 78 percent of the 452 fees agencies reviewed, all or a portion of the fees were authorized to cover or reimburse agency expenses. In 67 percent of the 94 fees agencies did not review, all or a portion of the fees were authorized to cover or reimburse agency expenses. Generally, the CFO agencies did not fully adhere to OMB Circular A-25 and the CFO Act user fee review provisions requiring that user fee rates be reviewed biennially. It did not appear that agencies placed significantly less emphasis on reviewing fees that were to be deposited in Treasury’s general fund than they placed on fees that were authorized to cover agencies’ expenses. The agencies did not review all of the fees that should have been reviewed and reviewed fees set by legislation less often than other fees. 
For example, only 6 of the 24 CFO agencies reviewed all of their user fees at least biennially. Also, some agencies could be recovering less than their actual costs when their fees are based on cost recovery because of a lack of adequate cost accounting systems in the government to identify actual costs. Further, eight of the agencies did not include a review of potential new user fees as required by OMB. As a result, the government may not be recovering the costs or the current market value, where appropriate, for the goods and services it provides. OMB’s guidance on how and where to report the results of user fee reviews is not clear. Many of the agencies reported that Circular A-25 user fee reporting instructions were confusing and had not reported the results of the user fee reviews in CFO reports. Administration officials and Congress, therefore, have incomplete information on whether the government is recovering costs of providing goods and services or is obtaining the current market value, where appropriate. We recommend that the Director of OMB clarify the user fee reporting instructions by specifying how agencies should report the results of their user fee reviews and address the issues of compliance with the biennial review requirements, including the requirements regarding statutorily set fees and agencies’ consideration of potential new user fees. We requested written comments on a draft of this report from the Director of the Office of Management and Budget and oral comments from the Chief Financial Officers of the 24 agencies on the accuracy of information in the draft report pertaining to the agencies. On June 12, 1998, we received written comments from OMB’s Assistant Director for Budget, which are included in appendix I. OMB commented that while it was pleased to see that most of the fees were reviewed annually or biennially, it shares our concern that agencies pay attention to the review and discussion requirements in the Chief Financial Officers Act of 1990 and OMB Circular A-25. OMB further stated that it will continue its efforts in 1998 to increase agency awareness and compliance with current CFO Act and Circular A-25 requirements. OMB said that it would highlight the requirements of user fee reviews in this year’s update to Circular A-11 to make agencies more fully aware of the requirements. As of June 29, 1998, we had received responses from 23 of the 24 CFO agencies. We had not received a response from the Department of Housing and Urban Development. Seventeen agencies provided oral comments, and six agencies provided written comments. Ten of the agencies responded that they either had no comments on the draft report or agreed with the information in the report. Nine of the agencies provided additional information on their user fee reviews or suggested technical changes, which we considered and incorporated within the report where appropriate. Four agencies raised programmatic or policy-related issues, as follows: SSA said that it had reviewed two of its fees annually and asked us to revise our data to recognize this. SSA provided documentation it believed would support its contention that the reviews had been done. However, in our view, the documentation SSA provided was not sufficient evidence that the user fee reviews met the requirements of Circular A-25. Accordingly, we did not revise our report as SSA had requested, and we informed SSA of our decision. 
SSA also said it had reviewed two other fees and was deciding the fee amounts, and we noted this in the report. The Department of Health and Human Services, the National Aeronautics and Space Administration, and the Small Business Administration raised policy-related issues, such as the need for biennial reviews in light of the new Managerial Cost Accounting Standards and whether the new user fee definition in Circular A-11 supersedes the Circular A-25 definition. We did not cover these types of issues in our review, but expect that OMB will consider such issues as it revises its instructions on user fee reviews. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and the Senate Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, and the Director of OMB. We will also make copies available to others upon request. Major contributors to this report are listed in appendix II. If you have any questions about the report, please call me on (202) 512-8387. Alan N. Belkin, Assistant General Counsel; Jessica A. Botsford, Senior Attorney.
Pursuant to a congressional request, GAO reviewed agencies' adherence to the user fee review and reporting requirements in the Chief Financial Officers (CFO) Act of 1990 and Office of Management and Budget (OMB) Circular A-25, focusing on whether the agencies: (1) reviewed their user fee rates biennially during fiscal years (FY) 1993 through 1997; (2) determined both direct and indirect costs when reviewing fees based on costs or current market value for fees based on market value; (3) reviewed other programs within the agency to identify potential new user fees; and (4) reported the results of the user fee reviews in their CFO annual reports. GAO noted that: (1) six of the 24 CFO agencies reviewed all of their reported user fees at least every 2 years as required by OMB Circular A-25 during FY 1993 through FY 1997, 3 reviewed all of their reported fees at least once, 11 reviewed some of their reported fees, and 4 did not review any of their reported fees during this period; (2) the 24 agencies reported 546 user fees, of which 418 were reviewed either annually or biennially; (3) the agencies provided various reasons for not reviewing fees, including insufficient cost data and because some of the fees set by legislation could not be changed without new legislation; (4) it appeared that agencies did not place significantly less emphasis on reviewing fees that went to the Department of the Treasury's general fund than on fees authorized to cover agency expenses; (5) documentation provided by the agencies indicated that of the reviewed fees that were based on cost recovery, 99 percent included both direct and indirect costs; (6) fee review documentation indicated that of the 23 reviewed fees that were based on market value, 14 reviews included a determination of current market value; (7) GAO did not verify these cost data or market evaluations; (8) agency documentation also indicated that of the 20 agencies that conducted user fee reviews, 8 agencies that had the potential for new fees did not consider new fee opportunities in their reviews; (9) twelve of the 20 agencies either looked for potential new fees or reported that they did not provide a service for which a fee was not already charged; (10) eleven of the 24 agencies had not reported the results of their biennial reviews, or lack thereof, in their CFO annual reports for FY 1993 through FY 1997; (11) only six agencies reported the review results two or more times during the 5-year period; (12) most of the agencies not reporting their user fee reviews said they did not do so either because the total amount of the fees was considered to be minimal and not considered material or because they found the reporting requirements confusing; and (13) OMB agreed that reporting instructions for the user fee review need to be clarified and plans to address this matter during 1998, as it revises its instructions.
It would be useful at this point to describe several differences between multiemployer and single-employer plans. Multiemployer plans are established pursuant to collectively bargained agreements negotiated between labor unions representing employees and two or more employers and are generally jointly administered by trustees from both labor and management. Single-employer plans are administered by one employer and may or may not be collectively bargained. Multiemployer plans typically cover groups of workers in such industries as construction, retail food sales, and trucking, with construction representing 38 percent of all participants. In contrast, 47 percent of single-employer plan participants are in manufacturing. Multiemployer plans provide participants limited benefit portability in that they allow workers the continued accrual of defined benefit pension rights when they change jobs, if their new employer is also a sponsor of the same plan. This arrangement can be particularly advantageous in industries like construction, where job change within a single occupation is frequent over the course of a career. Single-employer plans are established and maintained by only one employer and do not normally offer benefit portability. Multiemployer plans also differ from so-called multiple-employer plans, which are not generally established through collective bargaining agreements and where many plans maintain separate accounts for each employer. The Teachers Insurance and Annuity Association and College Retirement Equities Fund (TIAA-CREF) is an example of a large multiple-employer plan organized around the education and research professions. TIAA-CREF offers a defined contribution plan, in which contributions are accumulated over a career and paid out at retirement, often as an annuity. Below are some features that illustrate key differences between single-employer and multiemployer plans:

Contributions: In general, the same ERISA funding rules apply to both single- and multiemployer defined benefit pension plans. While ERISA and IRC minimum funding standards permit plan sponsors some flexibility in the timing of pension contributions, individual employers in multiemployer plans cannot as easily adjust their plan contributions. For multiemployer plans, contribution levels are usually negotiated through the collective bargaining process and are fixed for the term of the collective bargaining agreement, typically 2 to 3 years. Employer contributions to many multiemployer plans are typically made at a set dollar amount per hour of covered work and are thus tied to the employment of active plan participants. Other things being equal, reduced employment of active participants will result in lower contributions and reduced plan funding.

Withdrawal liability: Congress enacted the Multiemployer Pension Plan Amendments Act (MPPAA) of 1980 to protect the pensions of participants in multiemployer plans by establishing a separate PBGC multiemployer plan insurance program and by requiring any employer wanting to withdraw from a multiemployer plan to be liable for its share of the plan's unfunded liability. The law contains a formula for determining the amount an employer withdrawing from a multiemployer plan is required to contribute, known as "withdrawal liability." This amount is based upon a proportional share of the plan's unfunded vested benefits. 
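MPPAA's statutory allocation rules are more detailed than this summary, but the proportional-share idea just described can be illustrated with a simplified sketch: a withdrawing employer's liability is approximated as its share of plan contributions applied to the plan's unfunded vested benefits. The dollar amounts and the strictly contribution-based allocation shown below are simplified assumptions for illustration, not the statutory formula.

# Simplified illustration of the proportional-share idea behind withdrawal liability.
# MPPAA's actual allocation methods are more involved; this sketch assumes liability is
# allocated strictly in proportion to an employer's share of plan contributions.
# All dollar amounts are hypothetical.

def withdrawal_liability(unfunded_vested_benefits: float,
                         employer_contributions: float,
                         total_contributions: float) -> float:
    """Approximate a withdrawing employer's liability as its contribution share of UVBs."""
    share = employer_contributions / total_contributions
    return unfunded_vested_benefits * share

liability = withdrawal_liability(unfunded_vested_benefits=100_000_000,
                                 employer_contributions=5_000_000,
                                 total_contributions=50_000_000)
print(f"Approximate withdrawal liability: ${liability:,.0f}")  # $10,000,000 (a 10 percent share)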
Furthermore, if a participating employer becomes bankrupt, MPPAA requires that the remaining employers in the plan assume the additional funding responsibility for the benefits of the bankrupt employer's plan participants. For single-employer plans, the sponsoring employer is liable only for the unfunded portion of its own plan or its current liability in a bankruptcy (distress termination).

Different premiums and benefit guarantee levels: PBGC operates two distinct insurance programs, one for multiemployer plans and one for single-employer plans, which have separate insurance funds, different benefit guarantee rules, and different insurance coverage rules. The two insurance programs and PBGC's operations are financed through premiums paid annually by plan sponsors, investment returns on PBGC assets, assets acquired from terminated single-employer plans, and recoveries from employers responsible for underfunded terminated single-employer plans. Premium revenue totaled about $973 million in 2003, of which $948 million was paid into the single-employer program and $25 million paid to the multiemployer program. Single-employer plans pay PBGC an annual flat-rate premium of $19 per participant per year for pension insurance coverage. Plans that are underfunded generally also have to pay PBGC an additional annual variable rate premium of $9 per $1,000 of underfunding for the additional exposure they create for the insurance program. In contrast, the only premium for multiemployer plans is a flat $2.60 per participant per year. PBGC guarantees benefits for multiemployer pensioners at a much lower dollar amount than for single-employer pensioners: about $13,000 annually for 30 years of service for the former compared with about $44,000 annually per retiree at age 65 for the latter.

Financial assistance and the insurable event: PBGC's "insurable event" for its multiemployer program is plan insolvency. A multiemployer plan is insolvent when its available resources are not sufficient to pay the level of benefits at PBGC's multiemployer guaranteed level for 1 year. In contrast, the insurable event for the single-employer program is generally the termination of the plan. In addition, unlike its role in the single-employer program, where PBGC trustees weak plans and pays benefits directly to participants, PBGC does not take over the administration of multiemployer plans but instead provides financial assistance in the form of loans when plans become insolvent. A multiemployer plan need not be terminated to qualify for PBGC loans, but it must be insolvent and is allowed to reduce or suspend payment of that portion of the benefit that exceeds the PBGC guarantee level. If the plan recovers from insolvency, it must begin repaying the loan on reasonable terms in accordance with regulations. Such financial assistance is infrequent; for example, PBGC has made loans totaling $167 million to 33 multiemployer plans since 1980, compared with 296 trusteed terminations of single-employer plans and PBGC benefit payments of over $4 billion in 2002-2003 alone. The net effect of these different features is that there is a different distribution of financial risk among employers, participants, and PBGC under the multiemployer program, compared with PBGC's single-employer program. Member employers and participants in multiemployer plans bear far more financial risk, and PBGC, and implicitly the taxpayer, bear far less risk under the multiemployer program. 
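The premium structures described above lend themselves to a simple side-by-side computation. The sketch below applies the stated rates (a $19 flat-rate premium per participant plus a $9 variable-rate premium per $1,000 of underfunding for single-employer plans, and a flat $2.60 per participant for multiemployer plans) to a hypothetical plan; the plan's participant count and underfunding are invented for illustration and do not describe any actual plan.

# Comparing annual PBGC premiums under the two programs, using the rates stated above.
# The example plan (10,000 participants, $50 million underfunded) is hypothetical.

def single_employer_premium(participants: int, underfunding: float) -> float:
    flat = 19.00 * participants               # flat-rate premium per participant
    variable = 9.00 * (underfunding / 1_000)  # $9 per $1,000 of underfunding
    return flat + variable

def multiemployer_premium(participants: int) -> float:
    return 2.60 * participants                # flat premium only

participants, underfunding = 10_000, 50_000_000
print(f"Single-employer premium: ${single_employer_premium(participants, underfunding):,.0f}")  # $640,000
print(f"Multiemployer premium:   ${multiemployer_premium(participants):,.0f}")                   # $26,000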
In addition, PBGC officials explained that the features of the multiemployer regulatory framework have also led to a lower frequency of financial assistance. They note that greater financial risks faced by employers and the lower guaranteed benefits assured participants create incentives for employers, participants, and their collective bargaining representatives to avoid insolvency and to collaborate in trying to find solutions to a plan's financial difficulties. While multiemployer plan funding has exhibited considerable stability over the past two decades, available data suggest that many plans have recently experienced significant funding declines. Since 1980, aggregate multiemployer plan funding has been stable, with the majority of plans funded above 90 percent of total liabilities and average funding at 105 percent in 2000. Recently, however, it appears that a combination of stock market declines coupled with low interest rates and poor economic conditions has reduced the assets and increased the liabilities of many multiemployer plans. In PBGC's 2003 annual report, the agency estimated that total underfunding of underfunded multiemployer plans reached $100 billion by year-end, up from $21 billion in 2000, and that its multiemployer program had recorded a year-end 2003 net deficit of $261 million, the first deficit in more than 20 years. While most multiemployer plans continue to provide benefits at unreduced levels, the agency has also increased its forecast of the number of plans that will likely need financial assistance, from 56 plans in 2001 to 62 plans in 2003. Private survey data are consistent with this trend, with one survey by an actuarial consulting firm showing the percentage of fully funded client plans declining from 83 percent in 2001 to 67 percent in 2002. In addition, long-standing declines in the number of plans and worker participation continue. The number of insured multiemployer plans has dropped by a quarter since 1980 to fewer than 1,700 plans in 2003, according to the latest data available. Although in 2001, multiemployer plans in the aggregate covered 4.7 million active participants, representing about a fifth of all active defined benefit plan participants, this number has dropped by 1.4 million since 1980. Aggregate funding for multiemployer pension plans remained stable during the 1980s and 1990s. By 2000, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities, with the average plan funded at 105 percent of liabilities. As shown in figure 1, the aggregate net funding of multiemployer plans grew from a deficit of about $12 billion in 1980 to a surplus of nearly $17 billion in 2000. From 1980 to 2000, multiemployer plan assets grew at an annual average rate of 11.7 percent, to about $330 billion, exceeding the average 10.5 percent annual percentage growth rate of single-employer plan assets. During the same time period, liabilities for multiemployer and single-employer pensions grew at an average annual rate of about 10.2 percent and 9.9 percent, respectively. A number of factors appear to have contributed to the funding stability of multiemployer plans, including the following:

Investment strategy: Historically, multiemployer plans appear to have invested more conservatively than their single-employer counterparts. 
Although comprehensive data are not available, some pension experts have suggested that defined benefit plans in the aggregate are more than 60 percent invested in equities, which are associated with greater risk and volatility than many fixed-income securities. Experts have stated that, in contrast, equity holdings generally constitute 55 percent or less of the assets of most multiemployer plans.

Contribution rates: Unlike funds for single-employer plans, multiemployer plan funds receive steady contributions from employers because those amounts generally have been set through multiyear collective bargaining contracts. Participating employers, therefore, have less flexibility to vary their contributions in response to changes in firm performance, economic conditions, and other factors. This regular contribution income is in addition to any investment return and helps multiemployer plans offset any declines in investment returns.

Risk pooling: The pooling of risk inherent in multiemployer pension plans may also have buffered them against financial shocks and recessions, since the gains and losses of the plans are less immediately affected by the economic performance of individual employer plan sponsors. Multiemployer pension plans typically continue to operate long after any individual employer goes out of business because the remaining employers in the plan are jointly liable for funding the benefits of all vested participants.

Greater average plan size: The stability of multiemployer plans may also partly reflect their size. Large plans (1,000 or more participants) constitute a greater proportion of multiemployer plans than of single-employer plans. (See figs. 2 and 3.) While 55 percent of multiemployer plans are large, only 13 percent of single-employer plans are large and 73 percent of single-employer plans have fewer than 250 participants, as shown in figure 2. However, the distribution of participants by plan size for multiemployer and single-employer plans is more comparable, with over 90 percent of both multiemployer and single-employer participants in large plans, as shown in figure 3.

Although data limitations preclude any comprehensive assessment, available evidence suggests that since 2000, many multiemployer plans have experienced significant reductions in their funded status. PBGC estimated in its 2003 annual report that the aggregate deficit of underfunded multiemployer plans had reached $100 billion by year-end, up from a $21 billion deficit at the start of 2000. In addition, PBGC reported a net accumulated deficit for its own multiemployer program of $261 million for fiscal year 2003, the first deficit since 1981 and its largest ever. (See fig. 4.) While most multiemployer plans continue to provide benefits at unreduced levels, PBGC has also reported that the deficit was primarily caused by new and substantial "probable losses," increasing the number of plans it classifies as likely requiring financial assistance in the near future from 58 plans with expected liabilities of $775 million in 2002 to 62 plans with expected liabilities of $1.25 billion in 2003. Private survey data and anecdotal evidence are consistent with this assessment of multiemployer funding losses. One survey by an actuarial consulting firm showed that the percentage of its multiemployer client plans that were fully funded declined from 83 percent in 2001 to 67 percent in 2002. Other, more anecdotal evidence suggests increased difficulties for multiemployer plans. 
For example, discussions with plan administrators have indicated that there has been an increase in the number of plans with financial difficulties in recent years, with some plans reducing or temporarily freezing the future accruals of participants. In addition, IRS officials recently reported an increase in the number of multiemployer plans (less than 1 percent of all multiemployer plans) requesting tax-specific waivers that would provide plans relief from current funding shortfall requirements. As with single-employer plans, falling interest rates coincident with stock market declines and generally weak economic conditions have contributed to the funding difficulties of multiemployer plans. The decline in interest rates in recent years has increased the present value of pension plan liabilities for DB plans in general, because the cost of providing future promised benefits increases when computed using a lower interest rate. At the same time, declining stock markets decreased the value of any equities held in multiemployer plan portfolios to meet those obligations. Finally, because multiemployer plan contributions are usually based on the number of hours worked by active participants, any reduction in participant employment will reduce employer contributions to the plan. Despite its relative financial stability, the multiemployer system has experienced a steady decline in the number of plans and in the number of active participants over the past 2 decades. In 1980, there were 2,244 plans, and by 2003 the number had fallen to 1,623, a decline of about 27 percent. While a portion of the decline in the number of plans can be explained by consolidations through mergers, few new plans have been formed (only 5, in fact, since 1992). Meanwhile, the number of active multiemployer plan participants has declined in both relative and absolute terms. By 2001, only about 4.1 percent of the private sector workforce was composed of active participants in multiemployer pension plans, down from 7.7 percent in 1980 (see fig. 5), with the total number of active participants decreasing from about 6.1 million to about 4.7 million. Finally, as the number of active participants has declined, the number of retirees has increased, from about 1.4 million to 2.8 million, and this increase has led to a decline in the ratio of active (working) participants to retirees in multiemployer plans. By 2001, there were about 1.7 active participants for every retiree, compared with 4.3 in 1980. (See fig. 6.) While the trend is also evident among single-employer plans, the decline in the ratio of active workers to retirees affects multiemployer funding more directly because employer contributions are tied to active employment. The higher benefit payouts required for greater numbers of retirees living longer and the reduced employer contributions resulting from fewer active workers combine to put pressure on the funding of multiemployer plans. A number of factors pose challenges to the long-term prospects of the multiemployer pension plan system. Some of these factors are specific to the features and nature of multiemployer plans, including a regulatory framework that some employers may perceive as financially riskier and less flexible than those covering other types of pension plans. For example, compared with a single-employer plan, an employer covered by a multiemployer plan cannot easily adjust annual plan contributions in response to the firm's own financial circumstances. 
This is because contribution rates are often fixed for periods of time by the provisions of the collective bargaining agreement. Collective bargaining itself, a necessary aspect of the multiemployer plan model and another factor affecting plans' prospects, has also been in long-term decline, suggesting fewer future opportunities for new plans to be created or existing ones to expand. As of 2003, union membership, a proxy for collective bargaining coverage, accounted for less than 9 percent of the private sector labor force and has been steadily declining since 1953. Experts have identified other challenges to the future prospects of defined benefit plans generally, including multiemployer plans. These include the growing trend among employers to choose defined contribution plans over DB plans, including multiemployer plans; the continued increase in the life expectancy of American workers, resulting in participants spending more years in retirement, thus increasing pension benefit costs; and increases in employer-provided health insurance costs, which are increasing employers' compensation costs generally, including pensions. Some factors that raise questions about the long-term viability of multiemployer plans are specific to certain features of multiemployer plans themselves, including features of the regulatory framework that some employers may well perceive as less flexible and financially riskier than the features of other types of pension plans. For example, an employer covered by a multiemployer pension plan typically does not have the funding flexibility of a comparable employer sponsoring a single-employer plan. In many instances, the employer covered by the multiemployer plan cannot as easily adjust annual plan contributions in response to the firm's own financial circumstances. Employers that value such flexibility might be less inclined to participate in a multiemployer plan. Employers in multiemployer plans may also face greater financial risks than those in other forms of pension plans. For example, an employer sponsor of a multiemployer plan that wishes to withdraw from the plan is liable for its share of pension plan benefits not covered by plan assets upon withdrawal from the plan, rather than when the plan terminates, as with a single-employer plan. Employers in plans with unfunded vested benefits face an immediate withdrawal liability that can be costly. In addition, employers in fully funded plans also face the potential of costly withdrawal liability if the plan becomes underfunded in the future through the actions of other sponsors participating in the multiemployer plan. Thus, an employer's pension liabilities become a function not only of the employer's own performance but also of the financial health of other plan sponsors in the multiemployer plan. These additional sources of potential liability can be difficult to predict, increasing employers' level of uncertainty and risk. Some employers may hesitate to accept such risks if they can sponsor other plans that do not have them, such as 401(k)-type defined contribution plans. The future growth of multiemployer plans is also predicated on the future of collective bargaining. Collective bargaining is an inherent feature of the multiemployer plan model. Collective bargaining, however, has been declining in the United States since the early 1950s. Currently, union membership, a proxy for collective bargaining coverage, accounts for less than 9 percent of the private sector labor force. 
Union membership accounted for about 19 percent of the entire national workforce in 1980 and about 27 percent of the civilian workforce in 1953. Pension experts have identified a variety of challenges faced by today's defined benefit pension plans, including multiemployer plans. These include the continued general shift away from DB plans to defined contribution (DC) plans, and the increased longevity of the U.S. population, which translates into a lengthier and more costly retirement. In addition, the continued escalation of employer health insurance costs has placed pressure on the compensation costs of employers, including pensions. Employers have tended to move away from DB plans and toward DC plans since the mid-1980s. The total number of PBGC-insured defined benefit plans, including single-employer plans, declined from 97,683 in 1980 to 31,135 in 2002. (See fig. 7.) The number of DC plans sponsored by private employers nearly doubled from 340,805 in 1980 to 673,626 in 1998. Along with this continuing trend toward sponsoring DC plans, there has also been a shift in the mix of plans that private sector workers participate in. Labor reports that the percentage of private sector workers who participated in a primary DB plan has decreased from 38 percent in 1980 to 21 percent by 1998, while the percentage of such workers who participated in a primary DC plan has increased from 8 percent to 27 percent during this same period. Moreover, these same data show that by 1998, the majority of active participants (workers participating in their employer's plan) were in DC plans, whereas nearly 20 years earlier the majority of participants had been in DB plans. Experts have suggested a variety of explanations for this shift, including the greater risk borne by employers with DB plans, greater administrative costs and more onerous regulatory requirements, and the fact that employees more easily understand and favor DC plans. These experts have also noted considerable employee demand for plans that state benefits in the form of an account balance and emphasize portability of benefits, such as is offered by 401(k)-type defined contribution pension plans. The increased life expectancy of workers also has important implications for defined benefit plan funding, including multiemployer plans. The average life expectancy of males at birth has increased from 66.6 years in 1960 to 74.3 years in 2000, with females at birth experiencing a rise of 6.6 years, from 73.1 to 79.7, over the same period. As general life expectancy has increased in the United States, there has also been an increase in the number of years spent in retirement. PBGC has noted that improvements in life expectancy have extended the average amount of time spent by workers in retirement from 11.5 years in 1950 to 18 years for the average male worker as of 2002. This increased duration of retirement has required employers with defined benefit plans to increase their contributions to match this increase in benefit liabilities. This problem is exacerbated for those multiemployer plans with a shrinking pool of active workers because plan contributions are generally paid on a per work-hour basis, contributing to the funding strain we discussed earlier. Increasing health insurance costs are another factor affecting the long-term prospects of pensions, including multiemployer pensions. 
Recent increases in employer-provided health insurance costs are accounting for a rising share of total compensation, increasing pressure on employers' ability to maintain wages and other benefits, including pensions. Bureau of Labor Statistics data show that the cost of employer-provided health insurance has risen steadily in recent years, growing from 5.4 percent of total compensation in 1999 to 6.5 percent as of the third quarter of 2003. A private survey of employers found that employer-sponsored health insurance costs rose about 14 percent between the spring of 2002 and the spring of 2003, the third consecutive year of double-digit increases and the highest premium increase since 1990. Plan administrators and employer and union representatives that we talked with identified the rising costs of employer-provided health insurance as a key problem facing plans, as employers are increasingly forced to choose between maintaining current levels of pension benefits and maintaining medical benefits. Although available evidence suggests that multiemployer plans are not experiencing anywhere near the magnitude of the problems that have recently afflicted single-employer plans, there is cause for concern. The declines in interest rates and equities markets, and weak economic conditions in the early 2000s, have increased the financial stress on both individual multiemployer plans and the multiemployer framework generally. Most significant is PBGC's estimate of $100 billion in unfunded multiemployer plan liabilities that are being borne collectively by employers and plan participants. At this time, PBGC and, potentially, the taxpayer do not face the same level of exposure from this liability with multiemployer plans that they do with single-employer plans. This is because, as PBGC officials have noted, the current regulatory framework governing multiemployer plans redistributes financial risk toward employers and workers and away from the government. Employers face withdrawal and other liabilities that can be significant. In addition, should a multiemployer plan become insolvent, workers face the prospect of receiving far lower guaranteed benefits than workers receive under PBGC's single-employer program guaranteed limits. Together, these features not only limit PBGC's exposure but also create important incentives for all interested parties to resolve difficult financial situations that could otherwise result in plan insolvency. Because the multiemployer plans' structure balances risk in a manner that fosters constructive collaboration among interested parties, proposals to address multiemployer plans' funding stress should be carefully designed and considered for their long-term consequences. For example, proposals to shift plan liabilities to PBGC by making it easier for employers to exit multiemployer plans could help a few employers or participants but erode the existing incentives that encourage interested parties to independently face up to their financial challenges. In particular, placing additional liabilities on PBGC could ultimately have serious consequences for the taxpayer, given that with only about $25 million in annual income, a trust fund of less than $1 billion, and a current deficit of $261 million, PBGC's multiemployer program has very limited resources to handle a major plan insolvency that could run into billions of dollars.
The current congressional efforts to provide funding relief are at least in part in response to the difficult conditions experienced by many plans in recent years. However, these efforts are also occurring in the context of the broader long-term decline in private sector defined benefit plans, including multiemployer plans, and the attendant rise of defined contribution plans, with their emphasis on greater individual responsibility for providing for a secure retirement. Such a transition could lead to greater individual control and reward for prudent investment and planning. However, if managed poorly, it could lead to adverse distributional effects for some workers and retirees, including a greater risk of a poverty-level income in retirement. Under this transition view, the more fundamental issues concern how to minimize the potentially serious, negative effects of the transition while balancing risks and costs for employers, workers, and retirees, and for the public as a whole. These important policy concerns make Congress’s current focus on pension reform both timely and appropriate. This concludes my prepared statement. I am happy to answer any questions that the subcommittee may have. For further questions on this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Joseph Applebaum, Tim Fairbanks, Charles Jeszeck, Gene Kuehneman, Raun Lazier, and Roger J. Thomas. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Multiemployer defined benefit pension plans, which are created by collective bargaining agreements covering more than one employer and generally operated under the joint trusteeship of labor and management, provide coverage to over 9.7 million of the 44 million participants insured by the Pension Benefit Guaranty Corporation (PBGC). The recent termination of several large single-employer plans--plans sponsored by individual firms--has led to millions of dollars in benefit losses for thousands of workers and left PBGC, their public insurer, with an $11.2 billion deficit as of September 30, 2003. The serious difficulties experienced by these single-employer plans have prompted questions about the health of multiemployer plans. This testimony provides information on differences between single-employer and multiemployer pension plans, recent trends in the funding of multiemployer pension plans and worker participation in those plans, and factors that may pose challenges to the future prospects of multiemployer plans. GAO will soon release a separate report on multiemployer pension issues. The framework governing multiemployer plans generally places greater financial risk on employers and participants and less on PBGC than does PBGC's single-employer program. For example, in the event of employer bankruptcy, the remaining employers in the multiemployer plan assume additional funding responsibility. Further, PBGC's guaranteed participant benefit is much lower for multiemployer participants, and PBGC does not provide financial assistance until the multiemployer plan is insolvent. Following two decades of relative financial stability, many multiemployer plans appear to have suffered recent funding losses, while long-term declines in participation and plan formation continue. At the close of the 1990s, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities. Since then, stock market declines, coupled with low interest rates and poor economic conditions, have reduced assets and increased liabilities for many plans. In its 2003 annual report, PBGC estimated that underfunded multiemployer plans now face an aggregate unfunded liability of $100 billion, up from $21 billion in 2000. PBGC also reported an accumulated net deficit of $261 million for its multiemployer program in 2003, the first since 1981. Meanwhile, since 1980, there has been a steady decline in the number of plans, from over 2,200 to fewer than 1,700, and a 1.4 million decline in the number of active workers in plans. The long-term prospects of the multiemployer system face a number of challenges. Some are inherent in the multiemployer design and regulatory framework, such as the greater perceived financial risk and reduced flexibility for employers, compared with other plan types. The long-term decline of collective bargaining also results in fewer participants and employers available to expand or create new plans. Other factors that pose challenges include the growing trend among employers to choose defined contribution plans; the increasing life expectancy of workers, which raises the cost of defined benefit plans; and continuing increases in employer health insurance costs, which compete with pensions for employer funding.
OGAC establishes overall PEPFAR policy and program strategies and coordinates PEPFAR program activities. In addition, OGAC allocates PEPFAR resources from the Global Health and Child Survival account to PEPFAR implementing agencies, primarily CDC and USAID. The agencies execute PEPFAR program activities through agency headquarters offices and interagency teams consisting of PEPFAR implementing agency officials in the countries and regions with PEPFAR-funded programs (PEPFAR country and regional teams). OGAC coordinates these activities through its approval of operational plans, which serve as annual work plans and document planned investments in, and the anticipated results of, HIV/AIDS-related programs. OGAC provides annual guidance on how to develop and submit operational plans. In fiscal years 2009 through 2011, OGAC approved operational plans representing $11.7 billion in PEPFAR program activities. These activities fall primarily in three broad program areas—prevention, treatment, and care—and 18 related program areas. Program activities aimed at preventing HIV infection and at treating those infected each represented about 30 percent of approved PEPFAR funding, while activities aimed at caring for AIDS patients represented about 20 percent. The remaining approximately 20 percent funded a variety of other program areas, such as health systems strengthening and building laboratory infrastructure. Figure 1 summarizes approved funding for these program areas in fiscal years 2009 through 2011. To carry out activities in these program areas, CDC and USAID use implementing mechanisms—grants, cooperative agreements, and contracts—with a variety of implementing partners. These partners include partner country governments, nongovernmental and international organizations, and academic institutions. CDC and USAID used more than 3,000 implementing mechanisms in fiscal years 2008 through 2010. CDC and USAID offices employ a wide variety of individuals and organizations to conduct PEPFAR evaluations, including implementing agency officials, consultants, and academic institutions as well as partner government organizations and implementing partners. Evaluation teams sometimes comprise representatives from several of these organizations. OGAC coordinates, and PEPFAR implementing agencies also engage in, several related activities that support evaluation, such as oversight of implementing partners, routine performance planning and reporting, biological and behavioral health surveillance, baseline studies and needs assessments, and development of health management information systems. PEPFAR evaluations are subject to common evaluation standards defined in various agency-specific and governmentwide guidance. This guidance includes CDC's Framework for Program Evaluation in Public Health and USAID's evaluation policy and Automated Directives System guidance. In addition, GAO published guidance on designing evaluations and assessing social program impact evaluations. Also, in September 2010, the AEA published a framework to guide the development and implementation of federal agency evaluation programs and policies. The framework offers a set of general principles intended to facilitate the integration of evaluation activities with program management.
These principles include developing evaluation policies and procedures; developing evaluation plans; ensuring independence of evaluators in designing, conducting, and determining findings of their evaluations; ensuring professional competence of evaluators; and disseminating evaluation results publicly and in a timely fashion. OGAC, CDC, and USAID managed and conducted evaluations of a wide variety of PEPFAR programs that were ongoing during fiscal years 2008 through 2010. However, we found that many of these evaluations—particularly evaluations managed by PEPFAR country and regional teams—did not consistently adhere to common evaluation standards, in many cases calling into question the evaluations' support for their findings, conclusions, and recommendations. OGAC, CDC, and USAID provided 496 evaluations addressing programs ongoing during fiscal years 2008 to 2010 in all PEPFAR program areas relating to HIV/AIDS treatment, prevention, and care. Of these 496 evaluations, 18 were public health evaluations (PHE), managed by OGAC; 42 were program evaluations provided by CDC and USAID headquarters officials; and 436 were program evaluations provided by CDC and USAID country and regional team officials. (For more information about these evaluations, see app. III.)
• OGAC-managed evaluations. OGAC provided 18 PHEs that CDC and USAID had completed as of November 2011 under an OGAC-managed approval, implementation review, and reporting process. The completed PHEs addressed the following program areas: prevention of mother-to-child transmission, testing and counseling, adult care and support, adult treatment, sexual prevention, and pediatric care and support. In addition, OGAC indicated that 82 other PHEs had been initiated as of November 2011. According to OGAC, PHEs are intended to assess the effectiveness and impact of PEPFAR programs; compare evidence-based program models in complex health, social, and economic contexts; and address operational questions related to program implementation within existing and developing health systems infrastructures. OGAC guidance states that these evaluations focus on strategies to increase program efficiency and impact to guide program development and inform the public, using rigorous quantitative or qualitative methods that permit broad generalization. For all PHEs, OGAC requires PEPFAR country and regional teams to submit evaluation concepts or protocols for approval by an interagency subcommittee and requires periodic progress and closeout reports.
• CDC and USAID headquarters-managed evaluations. CDC headquarters officials provided 20 evaluations in the following program areas: blood safety, injection safety, adult treatment, pediatric treatment, and strategic information. USAID headquarters officials provided 22 evaluations in the following program areas: abstinence/be faithful, sexual prevention, orphans and vulnerable children, strategic information, and health systems strengthening programs. Four CDC and USAID headquarters evaluations addressed more than one program area.
• Country and regional team-managed evaluations. CDC and USAID officials representing 31 PEPFAR country and 3 regional teams provided a total of 436 evaluations; CDC officials provided 185 evaluations, and USAID officials provided 251 evaluations. The evaluations addressed 18 program areas related to PEPFAR prevention, treatment, and care, with about one-fifth of the evaluations addressing activities in more than one program area (see fig. 2).
CDC and USAID officials also provided copies of evaluation protocols and statements of work, indicating that additional evaluations had been initiated. Further, based on our analysis of a randomly selected sample of 78 evaluations, we estimate that 51 percent of the evaluations used qualitative methods, 35 percent used quantitative methods, and 14 percent used a mix of quantitative and qualitative methods. In addition, evaluations provided by USAID tended to employ qualitative methods (32 of 48 evaluations), while those provided by CDC tended to use quantitative methods (20 of 30 evaluations). (See app. III for additional results of our analysis.) Our assessments of judgmental and randomly selected samples of PEPFAR evaluations indicate that many—particularly those managed by PEPFAR country and regional teams—contain findings, conclusions, and recommendations that are not fully supported. To determine the extent to which these elements are supported, we synthesized our assessments of the extent to which evaluations generally adhered to several common evaluation standards defined in guidance issued by CDC, USAID, and GAO. Specifically, we considered whether the evaluations describe the program to be evaluated and its objectives, the purpose of the evaluation, and the criteria used to reach conclusions about the achievement of the program's objectives. We also considered the extent to which evaluations incorporate appropriate designs, sample selection methods, measures, and data collection and analysis methods. All OGAC-managed PHEs that we reviewed generally adhered to these standards and thus their findings, conclusions, and recommendations were fully supported. We found similar results for most CDC and USAID headquarters' program evaluations we reviewed. However, PEPFAR country and regional teams' evaluations did not consistently adhere to common evaluation standards, and thus, in most cases, their findings, conclusions, and recommendations were not fully supported. OGAC-managed evaluations. Our assessment of seven OGAC-managed PEPFAR PHEs indicates that they all generally adhered to common evaluation standards, and thus their findings, conclusions, and recommendations were fully supported. All of the evaluations that we reviewed identified program and evaluation objectives and used appropriate measures, and most used appropriate evaluation designs and data collection and analysis methods. Three of the evaluations employed fully appropriate sampling methods. Table 1 summarizes our assessments of these evaluations. CDC and USAID headquarters-managed evaluations. Our assessment of 15 CDC and USAID headquarters-managed evaluations indicates that most generally adhered to common evaluation standards. As a result, we found that findings, conclusions, and recommendations were fully supported in 9 evaluations and partially supported in 6 evaluations. Most of the evaluations employed appropriate evaluation designs, measures, and data collection and analysis methods. However, 7 evaluations did not fully identify the evaluation criteria, and 8 did not employ fully appropriate sampling methods. Table 2 summarizes our assessments of these evaluations. Country and regional team-managed evaluations. We found that evaluations managed by country and regional teams, which make up the bulk of all PEPFAR program evaluations, did not consistently adhere to common evaluation standards.
Based on our analysis of a randomly selected sample of country and regional team evaluations, we estimate that findings, conclusions, and recommendations were fully supported in 41 percent of all evaluations provided to us by country and regional teams, partially supported in 44 percent of these evaluations, and not supported in 15 percent of these evaluations. We estimate that 24 percent of these evaluations did not identify any evaluation criteria, and more than half did not employ evaluation designs, sampling methods, measures, or data collection and analysis methods that were fully appropriate. For example, an evaluation of activities for providing care to orphans and vulnerable children drew conclusions about results and made recommendations, based almost exclusively on favorable anecdotal information collected from selected program participants and beneficiaries. As a result, the objectivity and credibility of these evaluations' findings, conclusions, and recommendations are in question. Table 3 summarizes our assessments of these evaluations. Further analysis of the results of our assessments showed that evaluations using qualitative methods were more likely to contain results that were partially supported or not supported than evaluations using quantitative methods. (See app. III for additional results of our analysis.) State, OGAC, CDC, and USAID have developed policies and procedures that apply to evaluations of PEPFAR programs, as called for in the AEA Roadmap. However, they have not fully adhered to other AEA Roadmap principles regarding evaluation planning, independence and competence of evaluators, and dissemination of evaluation results. First, OGAC has not developed PEPFAR evaluation plans at the program level or required the development of such plans in individual countries and regions, limiting its own ability to ensure that evaluation resources are appropriately targeted. Second, State, OGAC, CDC, and USAID guidance does not specify how to document the independence and competency of evaluators, and almost half of the evaluations we reviewed did not provide sufficient information to fully determine whether evaluators were free of conflicts of interest. Finally, not all evaluation reports are available online, thus limiting their accessibility and usefulness to PEPFAR decision makers and other stakeholders. In accordance with AEA principles, State, OGAC, CDC, and USAID have issued policies and procedures that are applicable to PEPFAR program evaluation.
• State evaluation policy. In February 2012, State's Bureau of Resource Management issued an evaluation policy that applies to all State bureaus and OGAC. The policy provides a framework for implementing evaluations of State's various programs and projects and encourages evaluations for programs and projects at all funding levels.
• OGAC operational plan guidance. According to OGAC officials, OGAC generally has deferred to implementing agency policies. OGAC also issues annual guidance to PEPFAR implementing agencies for preparation of their operational plans. OGAC's fiscal year 2012 operational plan guidance to PEPFAR country and regional teams, issued in August 2011, addresses some elements of evaluation.
The guidance differentiates three types of evaluation and research: basic program evaluation, which focuses on descriptive and normative evaluation questions; operations research, which focuses on program delivery and optimal allocation of resources; and impact evaluation, which measures the change in an outcome attributable to a particular program.
• CDC evaluation framework. In September 1999, the Program Evaluation Unit at CDC's Office of the Associate Director for Program issued an evaluation framework for CDC programs. The framework summarizes essential elements of program evaluation, clarifies program evaluation steps, and reviews standards for effective program evaluation, among other things. According to CDC's Chief Evaluation Officer, as of May 2012, CDC plans to issue evaluation guidelines and recommendations as well as additional guidance for using the evaluation framework.
• USAID evaluation policy. In January 2011, USAID's Bureau for Policy, Planning, and Learning revised evaluation policy to supplement existing evaluation guidance in USAID's Automated Directives System. According to USAID, this revised policy was intended to address a decline in the quantity and quality of evaluation practice within the agency in the recent past. The policy clarifies for USAID staff, partners, and stakeholders the purposes of evaluation; the types of evaluations that are required and recommended; and USAID's approach for conducting, disseminating, and using evaluations. Among other things, the policy sets forth the purposes of evaluation, the roles and responsibilities of USAID operating units, and evaluation requirements and practices for all USAID programs and projects. The policy requires all USAID operating units to consult with program office experts to ensure that scopes of work for external evaluations meet evaluation standards. The policy also states that operating units, in collaboration with the program office, must ensure that evaluation draft reports are assessed for quality by management and through an in-house peer technical review.
OGAC has not yet developed a program-level PEPFAR evaluation plan or required implementing agencies or country and regional teams to develop evaluation plans as called for by the AEA Roadmap.
• OGAC. State's recently issued evaluation policy requires that each State bureau, including OGAC, develop and submit a bureauwide evaluation plan that encompasses major policy initiatives and new programs as well as existing programs and projects. According to a senior OGAC official, at the time of our review, OGAC was discussing with State's Bureau of Resource Management how it will comply with this new requirement.
• CDC and USAID headquarters. OGAC defers to implementing agencies to plan evaluations of their headquarters-managed PEPFAR program activities, but CDC and USAID have not developed evaluation plans for such activities included in recent headquarters operational plans. OGAC's 2011 guidance for developing the headquarters operational plan requires a plan for technical area program priorities but does not address evaluation planning. Similarly, the fiscal year 2012 guidance does not include a requirement for an evaluation plan.
• Country and regional teams. OGAC defers to PEPFAR country and regional teams to plan evaluations of their program activities, but does not require that the teams develop and submit annual evaluation plans.
OGAC's 2011 guidance on developing country and regional operational plans urges country and regional teams to prioritize program evaluation in order to make PEPFAR programs more effective and sustainable. In addition, OGAC's fiscal year 2012 guidance calls for country and regional teams to address monitoring and evaluation in describing individual implementing partners' activities. However, neither the 2011 guidance nor the 2012 guidance instructs all country teams to develop evaluation plans. We reviewed PEPFAR country and regional operational plans for fiscal year 2011 and found that they did not include evaluation plans. Instead, these documents generally included (1) descriptions of ongoing or planned evaluations and related activities (e.g., surveillance) in program area narrative summaries and (2) descriptions of monitoring and evaluation activities in implementing partner activity narratives. In our analysis of information provided by country and regional teams, as well as CDC and USAID headquarters, we did not detect an evaluation rationale or strategy. Based on responses to our survey of CDC and USAID officials in 31 PEPFAR country and 3 regional teams, we calculated that evaluations had been conducted or were ongoing for about one-third of these countries' program activities in fiscal years 2008 through 2010. In addition, based on these officials' responses, we found similar percentages of ongoing and completed evaluations across the broad program areas of prevention, treatment, and care. We also analyzed CDC and USAID headquarters officials' responses to our survey and found that evaluations had been conducted or were ongoing for about half of the PEPFAR program activities managed by agencies' headquarters and implemented during fiscal years 2008 to 2010. However, we found no relationships between the percentages of program activities with ongoing or completed evaluations and budgets at the country, program area (i.e., prevention, treatment, or care), or program activity levels. State, CDC, and USAID policies and procedures address the independence of evaluators but do not consistently require that evaluation reports identify the evaluation team or address whether there are any potential conflicts of interest. In addition, some agency policies and procedures address the need to ensure that evaluators have appropriate qualifications, but none require that evaluations document those qualifications or certify that they are adequate.
• State. State's recently issued evaluation policy addresses evaluator independence and integrity, stating that evaluators should be free from program managers and not subject to their influence. This policy does not address evaluator qualifications.
• OGAC. OGAC's operational plan guidance to country and regional teams does not address the independence or professional qualifications of evaluators. According to OGAC officials, OGAC defers to implementing agency evaluation policies.
• CDC. CDC's evaluation framework addresses the need to assemble an evaluation team with the needed competencies, highlighting the importance of ensuring that evaluators have no particular stake in the results of the evaluation. The CDC evaluation framework also discusses appropriate ways to assemble an evaluation team.
• USAID. USAID's evaluation policy recommends that most evaluations be external and requires a disclosure of conflicts of interest for all evaluation team members.
In addition, USAID's evaluation policy requires that evaluation-related competencies be included in staffing selection policies. Our analysis of a randomly selected sample of evaluations submitted by 31 PEPFAR country and 3 regional teams found that the evaluations often did not address whether evaluators have potential conflicts of interest, as called for by the AEA Roadmap. We estimate that 27 percent of the evaluations fully addressed potential conflicts of interest, 59 percent partially addressed the issue, and 14 percent did not address the issue. In addition, while we were unable to determine whether potential conflicts of interest existed with the information provided in some of the evaluation reports, it appeared that there were evaluations in which potential conflicts of interest existed but were not addressed. For example, one evaluation report, relating to strengthening a partner country's nongovernmental HIV/AIDS organizations, indicated that the evaluation team was employed by the program activity's implementing partner, but the report did not address this potential conflict of interest. Furthermore, some country and regional program evaluations did not provide enough identifying information about evaluators to allow an assessment of evaluator independence or qualifications. We estimate that 86 percent of the evaluations fully identified the evaluators, while 14 percent provided either partial or no information. For example, an evaluation report we reviewed relating to HIV prevention program activities in one region named the organization that conducted the evaluation but did not provide any information on the evaluation team members. Moreover, we were unable to find any information about this organization in an online search based on the limited information available in the report. Agency policies and procedures generally support dissemination of evaluation results, but OGAC, CDC, and USAID have not ensured that evaluation methods, data, and evaluation results are made fully and easily accessible to the public.
• State. State's newly released evaluation policy requires bureaus to submit evaluations to a central repository.
• OGAC. OGAC officials told us that the office supports dissemination of the results of important global HIV/AIDS research and evaluations to a variety of stakeholders. For example, OGAC officials noted that the PEPFAR website contains information on PEPFAR results as well as monitoring and evaluation guides. OGAC officials also noted that dissemination strategies are a common component of evaluation protocols and the procurement mechanisms that fund them. In addition, OGAC maintains an intranet site, which is accessible to PEPFAR implementing agency officials and contains information about evaluation. However, OGAC does not have a mechanism for publicly and systematically disseminating evaluation results.
• CDC. CDC policy advises that effort is needed to ensure that evaluation findings are disseminated appropriately but does not require online dissemination of evaluation reports. CDC officials told us that they recently made changes to CDC's public website, which, as of April 2012, includes some information on program evaluations. In addition, CDC's Division of Global HIV/AIDS (DGHA) Science Office maintains a catalog of published journal articles coauthored by DGHA officials. However, CDC does not maintain a complete online inventory of evaluations.
• USAID.
USAID’s policy states that evaluation findings should be shared as widely as possible with a commitment to full and active disclosure. USAID requires submission of completed evaluations to the Development Experience Clearinghouse (DEC), the agency’s online repository of research documentation, but does not enforce this requirement. In 2010, USAID reported that practices for disseminating evaluation results were generally limited, that dissemination practices varied across the agency, and that the requirement to submit completed evaluations to the DEC had not been fully enforced. Additionally, USAID found that documents in the DEC were sometimes difficult to find. In February 2012, USAID also found that missions had reported submitting only 20 percent of their evaluations to the DEC in fiscal year 2009. Although documents submitted by 31 PEPFAR country and 3 regional teams showed that CDC and USAID have disseminated evaluation findings within these countries and regions in several ways, we found no publicly accessible and easily searchable Internet source for PEPFAR program evaluations. We received abstracts from annual meetings and conferences, presentations to partner government officials and stakeholders, published journal articles, and periodic agency reports, which may be publicly accessible via the Internet. However, as of the time of our review, our searches of five key websites generated far fewer PEPFAR evaluations than the 496 evaluations we received from country teams, CDC and USAID headquarters, and OGAC. We searched PubMed, the U.S. National Library of Medicine’s online database, but a search using “PEPFAR” and “evaluation” as search terms generated seven results. Likewise, as of April 2012, our search of USAID’s DEC, using “HIV/AIDS” and “evaluation” as search terms, generated 87 results, including some that were not evaluations, but USAID officials, in response to our request, later provided us nearly 300 evaluations. We also found some evaluations at two USAID-maintained websites, OVCsupport.net and AIDStar-One, but neither site was comprehensive or fully searchable. In addition, a website called Global HIV M&E Information provides a repository of voluntarily submitted monitoring and evaluation resources; however, we found few evaluations of PEPFAR programs. PEPFAR’s authorizing legislation emphasizes the importance of program evaluation as a tool for OGAC to ensure, among other things, that funds are spent on programs that show evidence of success. State, CDC, and USAID have demonstrated a clear commitment to program evaluation by conducting a wide variety of program evaluations that address at least one activity in each PEPFAR program area. However, many evaluations managed by PEPFAR country and regional teams lack fully supported findings, conclusions, and recommendations, evidenced by a lack of general adherence to common evaluation standards. Without fully supported findings, conclusions, and recommendations, these PEPFAR program evaluations have limited usefulness as a basis for decision making and may supply incomplete or misleading information for managers’ and stakeholders’ efforts to direct PEPFAR funding to programs that produce the desired outcomes and impacts. State, CDC, and USAID have demonstrated their commitment to program evaluation by developing policies and procedures that apply to evaluations, in accordance with established general principles. 
However, without a requirement that country and regional teams prepare and submit annual evaluation plans—for example, as a component of operational plans—OGAC is unable to ensure that program activities are subject to appropriate levels of evaluation. Moreover, without documentation of the independence and competence of PEPFAR program evaluators, OGAC, agency program managers, and other stakeholders have limited assurance that evaluation results are unbiased and credible. Finally, unless evaluation results are publicly and systematically disseminated and made easily searchable online, program officials and public health researchers may be unable to assess the credibility of their findings or use them for program decision making. We recommend that the Secretary of State direct the U.S. Global AIDS Coordinator to take the following four actions in collaboration with CDC and USAID to enhance PEPFAR evaluations: 1. develop a strategy to improve PEPFAR implementing agencies’ and country and regional teams’ adherence to common evaluation standards; 2. require implementing agency headquarters and country and regional teams to include evaluation plans in their annual operational plans; 3. provide detailed guidance for implementing agencies and country and regional teams on assessing, ensuring, and documenting the independence and competence of PEPFAR program evaluators; and 4. increase the online accessibility of PEPFAR program evaluation results. We provided a draft of this report to State, HHS’s CDC, and USAID. Responding jointly with CDC and USAID, State OGAC provided written comments (see app. IV). CDC and USAID also provided technical comments, which we incorporated as appropriate. In its written comments, State agreed with our recommendations and, emphasizing the interagency nature of the PEPFAR program, indicated that it will coordinate with PEPFAR agencies to implement our recommendations. First, State explained that it will work with PEPFAR implementing agencies to carry out the agencies’ evaluation policies and practices, which State deemed generally consistent with AEA principles, and will develop strategies to ensure the appropriate application of common evaluation standards. Second, State responded that it will work through PEPFAR interagency processes to develop PEPFAR program evaluation plans, which it noted could be included in annual PEPFAR operational plans. Third, State will work with PEPFAR implementing agencies to put in place guidance to document program evaluators’ independence and qualifications. Fourth, State affirmed that OGAC will collaborate with PEPFAR implementing agencies to develop strategies for improving dissemination of evaluation results and will use PEPFAR’s public website to link to agencies’ online resources. We are sending copies of this report to the Secretary of State, the Office of the U.S. Global AIDS Coordinator, U.S. Agency for International Development’s Office of HIV/AIDS, the Department of Health and Human Services’ Office of Global Affairs, the Centers for Disease Control and Prevention’s Division of Global HIV/AIDS, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VI. This report (1) identifies President’s Emergency Plan for AIDS Relief (PEPFAR) evaluation activities and examines the extent to which evaluation results are supported and (2) examines the extent to which PEPFAR policies and procedures adhere to established principles for the evaluation of U.S. government programs. To identify PEPFAR program evaluations and examine the extent to which they generated supported evaluation results, we collected and analyzed program evaluation documents provided by Centers for Disease Control and Prevention (CDC) and U.S. Agency for International Development (USAID) officials in the 31 PEPFAR countries and 3 regions with PEPFAR country or regional operational plans in fiscal year 2010, as well as the Department of State’s (State) Office of the U.S. Global AIDS Coordinator (OGAC) and CDC and USAID headquarters officials. To examine the extent to which PEPFAR program evaluation policies and procedures adhered to principles in the American Evaluation Association’s (AEA) An Evaluation Roadmap for a More Effective Government (AEA Roadmap), we reviewed the general principles for conducting federal government program evaluations, as well as OGAC, State, USAID, and CDC policies and guidance. In addition, we surveyed CDC and USAID officials in the 31 PEPFAR countries and 3 regions with PEPFAR annual country or regional operational plans in fiscal year 2010, as well as CDC and USAID headquarters officials, regarding ongoing and completed evaluations. Finally, we conducted interviews with OGAC, CDC, and USAID officials in Washington, D.C., and Atlanta, Georgia. To survey PEPFAR country and regional team officials, we took the following steps: 1. We consulted with OGAC and CDC and USAID headquarters officials and decided to use implementing mechanism as a proxy for a program activity. We determined that using implementing mechanisms was the only viable unit of analysis to estimate the percentage of PEPFAR programs with evaluations because (1) OGAC officials maintained updated data on implementing mechanisms and (2) PEPFAR officials regularly used and understood data on implementing mechanisms. However, in some of these cases, if the broader program was evaluated, not all implementing mechanisms under the larger program were necessarily evaluated. We also recognized that evaluations may not be appropriate for all implementing mechanisms (such as those that provide funding for staffing costs). To the extent possible, we eliminated these implementing mechanisms from our analysis. 2. We obtained lists of program activities for fiscal years 2008 through 2010 from OGAC for each country and region. We then analyzed program activities by country (or region) and agency; the lists included identification numbers, names, and partner names for each of the program activities. Each survey tool then contained a list of program activities relevant to the country or regional team. 3. Based on GAO and OGAC guidance, we developed the following working definition of evaluation: Evaluations are systematic studies to assess how well a program is working. Evaluations are often conducted by experts external to the program, either inside or outside the agency. Types of evaluations include process, outcome, impact, or cost-benefit analysis. 4. We developed a survey tool for ongoing and completed evaluations of PEPFAR programs. 
We consulted with OGAC and CDC and USAID headquarters officials about the survey tool and made revisions as appropriate. For example, based on input from CDC and USAID headquarters officials, we determined that some PEPFAR evaluations could address several implementing mechanisms. In addition, in some of these cases, if a broader program (e.g., national treatment program) was evaluated, not all implementing mechanisms under the broader program were necessarily evaluated. In response, we included questions in our survey prompting PEPFAR officials to indicate whether an implementing mechanism has been evaluated as part of a broader evaluation of several implementing mechanisms. 5. We tested the survey tool with officials in two PEPFAR countries—Angola and Ethiopia—and finalized the survey tool based on discussions with these officials. 6. We sent the final survey tool to PEPFAR country contacts (PEPFAR coordinators and CDC and USAID officials) identified by OGAC and CDC and USAID headquarters. The survey tool instructed CDC and USAID country or regional team officials to provide "yes" or "no" responses to the following questions for each implementing mechanism in the country's (or region's) agency-specific lists:
• Is this one of your agency's fiscal year 2008-2010 country or regional operational plan program activities?
• Has at least one evaluation specific to this implementing mechanism been completed?
• Is at least one evaluation specific to this implementing mechanism ongoing?
• Has at least one evaluation covering, but broader than, this implementing mechanism been completed?
• Is at least one evaluation covering, but broader than, this implementing mechanism ongoing?
We also prompted the country or regional officials to provide additional information for each implementing mechanism, such as explanations for program activities that do not belong to the agency and identification of duplicate program activities. Officials were instructed to either e-mail the completed surveys to GAO or upload them to a website regularly used by OGAC and country and regional teams for submitting and sharing planning and reporting documents. In some cases, we met with country or regional team officials via telephone, or corresponded via e-mail, to clarify the purpose of the survey, the questions themselves, and the evaluation document request as well as to correct anomalies and ask follow-up questions. One GAO analyst also attended the May 2011 PEPFAR implementing agency annual meeting in Johannesburg, South Africa, to provide information about the survey and evaluation document request to PEPFAR country and regional team officials also attending the annual meeting. We received responses from all 31 PEPFAR countries and 3 regions with fiscal year 2010 operational plans. Using a similar survey tool, we also conducted surveys of CDC and USAID headquarters officials regarding program activities managed by agency headquarters and listed in PEPFAR headquarters operational plans for 2008 through 2010. To analyze country and regional teams' survey responses, we made the following assumptions regarding the survey responses: If officials did not provide a response to the question "Is this one of your agency's fiscal year 2008-2010 country or regional operational plan program activities?" we included that implementing mechanism in the analysis. Program activities with responses of "no" or "duplicate" were eliminated from the analysis.
If officials did not respond to any of the four questions regarding ongoing or completed evaluations, we assumed that there were no ongoing or completed evaluations for that implementing mechanism. In addition, we reviewed narrative comments provided by country and regional team officials. We recognized that evaluations may not be appropriate for all implementing mechanisms (such as those that provide funding for staffing costs). To the extent possible, we eliminated these implementing mechanisms from our analysis. Based in part on our review of the narrative comments, we flagged and eliminated implementing mechanisms with evidence indicating that the implementing mechanism was either "to be determined" (i.e., the agency had yet to make an award to an implementing partner), related to staffing costs, related to strategic information and monitoring and evaluation, recently begun, a duplicate of another implementing mechanism, or listed in error. Once the survey responses were ready for analysis, we calculated the summary statistics that are reported in the body of the report. We also included the survey responses provided by officials in CDC and USAID headquarters in the analysis. To check the reliability of the data analysis, a second independent analyst reviewed the statistical programs used to analyze the data for accuracy. In addition to our survey of CDC and USAID officials in the 31 countries and 3 regions with fiscal year 2011 operational plans, we requested program evaluation documents. To do this, the survey tool instructions prompted CDC and USAID officials to provide documentation of completed and ongoing evaluations. Specifically, for implementing mechanisms where officials indicated that at least one evaluation had been completed, we requested documentation—such as an evaluation report—of all such completed evaluations. For implementing mechanisms where officials indicated that at least one evaluation was ongoing, we requested documentation—such as terms of work or an evaluation plan. We generally advised country and regional team officials to err on the side of inclusion when in doubt about whether to submit documentation of ongoing and completed evaluations. We instructed these officials to e-mail, or, in some cases, mail electronic versions of the program evaluation documents to GAO, or to upload them to a website regularly used by OGAC and country and regional teams for submitting and sharing planning and reporting documents. In response to this document request, we received more than 1,350 documents. For example, we received documentation of ongoing or planned evaluations, such as statements of work or evaluation protocols and protocol approval forms. We also received meeting minutes, trip reports, financial review and audit documents, presentation slides, abstracts, and conference posters. To determine which documents met our definition of evaluation, we reviewed each of these documents and categorized them as meeting the definition of evaluation or not, following a set of decision rules. For example, we included data quality assessments, costing studies that compared costs and explained cost differences, and analyses of surveillance data pre- and postintervention. We excluded surveillance studies that simply reported the results of a surveillance activity (but did not link it to a specific program or intervention); needs assessments, baseline studies, and situation analyses; trip and site visit reports; and pre- and postevent (e.g., workshop) questionnaires or surveys.
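The survey decision rules described above lend themselves to a simple screening-and-tallying script. The following minimal sketch in Python is purely illustrative: the field names ("is_agency_activity," "flagged_out_of_scope," and the four evaluation questions) are hypothetical, and it is not the statistical program GAO used.

```python
# Illustrative only: one way to apply the survey decision rules described above.
# Field names and responses are hypothetical, not GAO's actual data layout.
EVAL_QUESTIONS = [
    "completed_specific", "ongoing_specific",
    "completed_broader", "ongoing_broader",
]

def keep_mechanism(row):
    """Drop mechanisms answered 'no' or 'duplicate' on the ownership question,
    and drop flagged ones (to-be-determined awards, staffing costs, etc.);
    a blank ownership answer keeps the mechanism in the analysis."""
    if row.get("is_agency_activity") in ("no", "duplicate"):
        return False
    if row.get("flagged_out_of_scope"):
        return False
    return True

def has_evaluation(row):
    """Treat missing answers to the four evaluation questions as 'no'."""
    return any(row.get(q) == "yes" for q in EVAL_QUESTIONS)

def percent_evaluated(rows):
    kept = [r for r in rows if keep_mechanism(r)]
    return 100.0 * sum(has_evaluation(r) for r in kept) / len(kept)

# Three hypothetical implementing mechanisms.
rows = [
    {"is_agency_activity": "yes", "completed_specific": "yes"},
    {"is_agency_activity": "yes"},              # no evaluation reported
    {"is_agency_activity": "duplicate"},        # eliminated from the analysis
]
print(percent_evaluated(rows))                  # 50.0
```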
We identified and eliminated duplicate documents. This categorization was checked by a second analyst and yielded 436 program evaluations. We believe that this final set of evaluations constitutes an essentially full universe of PEPFAR country and regional program evaluation documents. In addition to the program evaluation documents collected from CDC and USAID officials in PEPFAR countries and regions, we requested documents from OGAC related to PEPFAR public health evaluations. We also requested evaluation documents related to PEPFAR programs managed by CDC and USAID headquarters from officials at each agency's headquarters. OGAC provided copies of 18 completed public health evaluations, CDC headquarters provided copies of 22 completed evaluations, and USAID headquarters provided copies of 24 completed evaluations. We reviewed the program evaluation documents submitted by PEPFAR country and regional teams as well as CDC and USAID headquarters officials. We identified whether each program evaluation was ongoing or completed as well as which program area or areas (e.g., prevention, treatment, care, or other) were evaluated. To do this, we used program categories defined by OGAC's fiscal year 2011 operational plan guidance, resulting in the program areas and related areas reported in the report. This categorization was checked by a second analyst. Table 4 provides descriptions of the PEPFAR program areas. To determine the degree to which these evaluations were conducted in adherence with common evaluation standards, we used an assessment tool to systematically conduct in-depth analyses of a probability sample of the evaluations submitted by the PEPFAR country and regional teams and a nonprobability sample of the evaluations submitted by OGAC and CDC and USAID headquarters officials. Our PEPFAR evaluation assessment tool was based on an assessment tool used for a prior GAO report, which we updated using guidance on evaluation from USAID, CDC, the Organization for Economic Cooperation and Development (OECD), and GAO. We piloted the assessment tool with three PEPFAR program evaluation documents provided by CDC and USAID headquarters officials and revised the evaluation assessment tool as appropriate. After piloting and revising the tool, we finalized the tool and used it to conduct the in-depth analyses of program evaluation documents. Table 5 lists the questions and supporting questions included in the assessment tool. To allow us to generalize to the entire set of evaluations provided by PEPFAR country and regional teams, we randomly selected a sample of 84 of 436 evaluations submitted by CDC and USAID officials in 31 PEPFAR countries and 3 regions. The list of all evaluations was sorted by total approved operational plan budgets for each country or region for fiscal years 2008 through 2010, so that a systematic sample would ensure representation of countries with relatively large, medium, and small budgets for fiscal years 2008 through 2010. After sampling, 6 evaluations—including, for example, baseline and feasibility studies—were found to be out of scope, resulting in a final sample of 78. Results based on random probability samples are subject to sampling error. The sample we drew for our survey is only one of a large number of samples we might have drawn. Because different samples could have provided different estimates, we express our confidence in the precision of our particular sample results as a 95 percent confidence interval.
This is the interval that would contain the actual population values for 95 percent of the samples we could have drawn. The margin of error associated with proportion estimates is no more than plus or minus 11 percentage points at the 95 percent level of confidence, and estimates of totals have a margin of error no larger than 44 evaluations. For the 18 public health evaluations submitted by OGAC, as well as the 20 and 22 evaluations submitted by CDC and USAID headquarters, respectively, we selected a nonprobability sample based on the type of program (e.g., prevention, treatment, care, or other) evaluated as well as country or countries addressed by each evaluation. Because this is a nonprobability sample, the results of our assessments of these evaluations cannot be used to make inferences about all evaluations managed by OGAC and CDC and USAID headquarters. However, they do represent a mix of the types of evaluations managed by OGAC and CDC and USAID headquarters. Using our evaluation assessment tool, we conducted in-depth analyses of the evaluation documents submitted by the PEPFAR country and regional teams and also those submitted by OGAC, USAID, and CDC headquarters. To do so, one analyst conducted an initial review of the evaluation document and then completed the evaluation assessment tool. The analyst also recorded basic information about each evaluation, including title, author, date of publication, and the country or countries included in the evaluation. For each of the questions in the assessment tool (see table 5), analysts were instructed to (1) respond using "yes," "no," "partial," "not sure," or "not applicable" and (2) summarize or cite relevant information from the evaluation documents. Analysts then were instructed to weigh the evidence and answers to these questions and provide "yes," "no," "partial," "not sure," or "not applicable" responses for each category. Based on the analysis of the elements addressed in the assessment tool, analysts determined the extent to which each evaluation's findings, conclusions, and recommendations were supported using "yes," "no," "partial," or "not sure" as their responses. This overall determination was not based on a tally of responses to individual elements in the evaluation assessment tool, but rather on a synthesis of these responses and an assessment of the contribution of each element to the overall support for the evaluation's findings, conclusions, and recommendations. To help ensure consistency in the application of the standards and questions, the assessors met weekly during the assessment period to clarify the instructions and discuss their observations. After each assessment was complete, a second analyst independently verified the results of the analysis by reviewing the program evaluation document and the completed evaluation assessment tool. In cases where the two analysts did not concur on the results, or where there was a "not sure" response, they met to discuss the evidence and documented a final determination. All the results for the evaluation assessment tools were then entered into a spreadsheet and analyzed. To assess potential associations between key attributes of the sample of 78 evaluations we randomly selected, we calculated chi-square tests and the associated odds ratios for all pairs of the following variables: agency, methods used, evaluation type, and program type. Key results from these analyses are presented in the report. Additional results can be found in appendix III.
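The sampling precision stated earlier (a margin of error of no more than plus or minus 11 percentage points) reflects standard confidence-interval arithmetic for a proportion estimated from a sample of 78 evaluations drawn from a population of 436. The short sketch below shows that arithmetic under the simplifying assumptions of simple random sampling and a finite population correction; it is illustrative only, and GAO's actual computation, which reflects the systematic sample design, may differ somewhat.

```python
import math

def proportion_moe(p_hat, n, N, z=1.96):
    """Approximate 95 percent margin of error for an estimated proportion from
    a sample of n drawn without replacement from a population of N."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)        # standard error under SRS
    fpc = math.sqrt((N - n) / (N - 1))             # finite population correction
    return z * se * fpc

# Worst case (p = 0.5) for a sample of 78 from 436 evaluations.
print(f"+/- {proportion_moe(0.5, 78, 436):.1%}")   # roughly +/- 10 percentage points
```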
We also employed logistic regressions to assess which of these variables (i.e., agency, methods used, evaluation type, and program type) had the strongest effects on the extent to which sampled evaluations contained support for findings, conclusions, and recommendations. To assess State, OGAC, CDC, and USAID evaluation policies, we developed an assessment tool based on nine AEA Roadmap principles. For each principle, we developed a question or series of questions asking how the policies addressed the AEA Roadmap principles. One analyst reviewed each agency's policy and filled out the tool by citing evidence that would support the policy's consistency with the AEA Roadmap principle, or a conclusion that no evidence could be found to support adherence to the principle. The analyst then concluded whether the policy was consistent with each principle assessed. A second analyst conducted a review of the completed assessment tools and either concurred with or disputed the conclusion for each principle. In cases where the two analysts did not concur, they met to discuss the evidence and made a final determination. To determine the extent to which operational plans contained evaluation plans, we reviewed OGAC's fiscal year 2011 and 2012 annual guidance to implementing agency headquarters regarding development of the annual PEPFAR headquarters operational plan. We documented instances where the guidance addressed program evaluation and determined whether it constituted instructions to develop an evaluation plan. We conducted a similar analysis of OGAC's fiscal year 2011 and 2012 annual guidance to PEPFAR country and regional teams to identify instances where the guidance addressed evaluation and, finally, to determine whether the guidance constituted instructions for developing evaluation plans. In addition, we assessed 11 of the 33 country operational plans and 2 of the 3 regional operational plans submitted to OGAC for fiscal year 2011, the most recent year in which plans were available. We documented instances where these operational plans discussed evaluation and whether they contained evaluation plans. To determine the extent to which the program evaluations documented potential conflicts of interest and the identity of evaluators, we included questions on these two elements in our evaluation assessment tool. Analysts were instructed to respond using "yes," "no," or "partial" to these questions and to cite relevant evidence. After each assessment was complete, a second analyst verified the results of the analysis by reviewing the program evaluation document and the completed evaluation assessment tool. In cases where the two analysts did not concur on the results, they met to discuss the evidence and documented a final determination. All the results for the evaluation assessment tools were then entered into a spreadsheet and analyzed. We searched five Internet databases referenced by OGAC, CDC, and USAID officials to determine the public accessibility of PEPFAR program evaluations. These five sites included the Development Experience Clearinghouse (http://dec.usaid.gov/index.cfm), PubMed (http://www.ncbi.nlm.nih.gov/pubmed/), OVCsupport.net (http://www.ovcsupport.net/s/), AIDSTAR-One (http://www.aidstar-one.com/), and Global HIV M&E Info (https://www.globalhivmeinfo.org/Pages/HomePage.aspx).
For each of these websites, we conducted searches using keywords that would capture any PEPFAR-related program evaluations or documentation, such as “PEPFAR,” “evaluation,” and “HIV/AIDS.” Where applicable, we then captured the results and counted the number of documents that could reasonably be considered documentation of a PEPFAR program evaluation. We conducted this performance audit from August 2011 to May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Past GAO work has emphasized evaluation as a key source of information to help agency officials and Congress make decisions about the programs they oversee. GAO distinguishes performance measurement—the ongoing monitoring and reporting of program accomplishments—from evaluation, which is defined as individual, systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working. Further, according to GAO guidance, experts external to the program, program managers, or both conduct evaluations to examine the performance of a program within a given context to understand not only whether a program works but also how to improve results. GAO guidance identifies four types of evaluation:  Process evaluation. This type of evaluation assesses the degree to which a program is operating as it was intended. It typically assesses program activities’ conformance to statutory or regulatory requirements, program design, and professional standards or customer expectations.  Outcome evaluation. This type of evaluation assesses the degree to which a program achieves its outcome-oriented objectives. It focuses on outputs and outcomes (including unintended effects) to judge program effectiveness, but may also assess program process to understand how outcomes are produced. Impact evaluation. This is a form of outcome evaluation that assesses the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. Impact evaluation is used when external factors are known to influence the program’s outcomes, in order to isolate the program’s contribution to achievement of its objectives.  Cost-benefit or cost-effectiveness analysis. This type of evaluation compares a program’s outputs or outcomes with the costs to produce them. Cost-effectiveness analysis assesses the cost of meeting a single objective and can be used to identify the least costly alternative for meeting that goal. In addition, GAO guidance provides basic information about the more commonly used evaluation methods; introduces key issues in planning evaluation studies of federal programs to best meet decision makers’ needs; and describes different types of evaluations for answering varied questions about program performance, the process of designing evaluation studies, and key issues to consider in ensuring overall study quality. Further, the guidance recommends standards for evaluation design, including establishing evaluation objectives, identifying constraints, and assessing the appropriateness of the evaluation design. 
We conducted a statistical analysis of the adequacy of support for findings in evaluations provided to us by CDC and USAID, to determine whether the adequacy of support differed by agency, by methods used, or by type of evaluation. Our analysis indicated that fully supported findings were more likely in CDC’s evaluations than in USAID’s evaluations; in evaluations that used quantitative methods than in evaluations that used qualitative or mixed methods; and in cost-benefit or impact evaluations, as well as outcome evaluations, than in process evaluations. However, while CDC’s evaluations’ findings were more likely to be fully supported than USAID’s evaluations’ findings, the difference was not statistically significant after we accounted for the method used in the evaluations. This lack of statistical significance suggests that the difference was driven partly by the agencies’ choice of evaluation method. Table 6 shows technical details of our statistical analysis of the level of support for findings in CDC and USAID evaluations. In table 6, the chi-square statistics at the base of each of the three panels show that the adequacy of support for findings varied significantly between the two agencies and differed significantly based on the methods used and type of evaluations. The odds ratios in the far-right column show that the odds of evaluations’ being fully supported were 3.6 times greater for CDC than for USAID; 18 times greater for quantitative evaluations than for qualitative or mixed-methods evaluations; 23 times greater for cost-benefit or impact evaluations than for process evaluations; and 3.7 times greater for outcome evaluations than for process evaluations. In addition, we estimated binary logistic regression models to determine whether the difference in adequacy of support for findings in CDC’s and USAID’s evaluations resulted from differences in the methods used or differences in the types of evaluations conducted. Table 7 shows the odds ratios that result from fitting logistic regression models to estimate the effects of the three different factors (agency, methods used, and type of evaluation) on the adequacy of support for findings. Models 1, 2, and 3 are bivariate models, which regress “support” on dummy variables for agency, methods used, and type of evaluation, with each variable considered one at a time. These produce the same odds ratios that we obtained from the observed data in table 6. In contrast, model 4 estimates the effects of agency and methods simultaneously, and model 5 estimates the effects of agency and type of evaluation. In comparing these models, we found that controlling for the methods used (model 4) rendered insignificant the differences between agencies in adequacy of support for findings, whereas controlling for type of evaluation (model 5) did not. In addition to the contact named above, Jim Michels, Assistant Director; Todd M. Anderson; Chad Davenport; David Dornisch; Lorraine Ettaro; Justin Fisher; Brian Hackney; Kay Halpern; Fang He; Reid Lowe; Grace Lui; and Erika Navarro made key contributions to this report. In addition to these staff, the following GAO staff assisted by conducting in-depth assessments of selected evaluations: Sada Aksartova, Gergana Danailova-Trainor, Leah DeWolf, Rachel Girshick, Jordan Holt, Kara Marshall, Jeff Miller, Steven Putansu, Mona Sehgal, and Doug Sloane. Sushmita Srikanth and Katy Crosby assisted with quality assurance reviews. President’s Emergency Plan for AIDS Relief: Program Planning and Reporting. 
GAO-11-785. Washington, D.C.: July 29, 2011. Global Health: Trends in U.S. Spending for Global HIV/AIDS and Other Health Assistance in Fiscal Years 2001-2008. GAO-11-64. Washington, D.C.: October 8, 2010. President’s Emergency Plan for AIDS Relief: Efforts to Align Programs with Partner Countries’ HIV/AIDS Strategies and Promote Partner Country Ownership. GAO-10-836. Washington, D.C.: September 20, 2010. President’s Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 15, 2009. Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2, 2008. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007. Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President’s Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 4, 2006. Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. Washington, D.C.: June 10, 2005. Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided under U.S. Emergency Plan Is Limited. GAO-05-133. Washington, D.C.: January 11, 2005. Global Health: U.S. AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: June 12, 2004. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 7, 2003.
PEPFAR, reauthorized by Congress in fiscal year 2008, supports HIV/AIDS prevention, treatment, and care overseas. The reauthorizing legislation, as well as other U.S. law and government policy, stresses the importance of evaluation for improving program performance, strengthening accountability, and informing decision making. OGAC leads the PEPFAR effort by providing funding and guidance to implementing agencies, primarily CDC and USAID. Responding to legislative mandates, GAO (1) identified PEPFAR evaluation activities and examined the extent to which evaluation findings, conclusions, and recommendations were supported and (2) examined the extent to which PEPFAR policies and procedures adhere to established general evaluation principles. GAO reviewed these principles as well as agencies' policies and guidance; surveyed CDC and USAID officials in 31 PEPFAR countries and 3 regions; and analyzed evaluations provided by OGAC, CDC, and USAID. The Department of State's (State) Office of the U.S. Global AIDS Coordinator (OGAC), the Department of Health and Human Services' (HHS) Centers for Disease Control and Prevention (CDC), and the U.S. Agency for International Development (USAID) have evaluated a wide variety of President's Emergency Plan for AIDS Relief (PEPFAR) program activities, demonstrating a clear commitment to evaluation. However, GAO found that the findings, conclusions, and recommendations were not fully supported in many PEPFAR evaluations. Agency officials provided nearly 500 evaluations addressing activities ongoing in fiscal years 2008 through 2010 in all program areas relating to HIV/AIDS treatment, prevention, and care. GAO's assessment of a selected sample of seven OGAC-managed evaluations found that they generally adhered to common evaluation standards, as did most of a selected sample of 15 evaluations managed by CDC and USAID headquarters. Based on this assessment, GAO determined that these evaluations generally contained fully supported findings, conclusions, and recommendations. However, based on a similar assessment of a randomly selected sample taken from 436 evaluations provided by PEPFAR country and regional teams, GAO estimated that 41 percent contained fully supported findings, conclusions, and recommendations, while 44 percent contained partial support and 15 percent were not supported. State, OGAC, CDC, and USAID have established detailed evaluation policies, as recommended by the American Evaluation Association (AEA). However, PEPFAR does not fully adhere to AEA principles relating to evaluation planning, independence and qualifications of evaluators, and public dissemination of evaluation results. Specifically, OGAC does not require country and regional teams to include evaluation plans in their annual operational plans, limiting its ability to ensure that evaluation resources are appropriately targeted. Further, although OGAC, CDC, and USAID evaluation policies and procedures provide some guidance on how to ensure evaluator independence and qualifications, they do not require documentation of these issues. GAO found that most PEPFAR program evaluations did not fully address whether evaluators had conflicts of interest and some did not include detailed information on the identity and makeup of evaluation teams. 
Finally, although OGAC, CDC, and USAID use a variety of means to share evaluation findings, not all evaluation reports are available online, limiting their accessibility to the public and their usefulness for PEPFAR decision makers, program managers, and other stakeholders. GAO recommends that State work with CDC and USAID to (1) improve adherence to common evaluation standards, (2) develop PEPFAR evaluation plans, (3) provide guidance for assessing and documenting evaluators’ independence and qualifications, and (4) increase online accessibility of evaluation results. Commenting jointly with HHS’s CDC and USAID, State agreed with these recommendations and noted steps it will take to implement them.
Securing transportation systems and facilities is complicated, requiring a balance between security measures that address potential threats and facilitating the flow of people and goods. These systems and facilities are critical components of the U.S. economy and are necessary for supplying goods throughout the country and supporting international commerce. U.S. transportation systems and facilities move over 30 million tons of freight and provide approximately 1.1 billion passenger trips each day. The Ports of Los Angeles and Long Beach estimate that they alone handle about 43 percent of the nation's oceangoing cargo. The importance of these systems and facilities also makes them attractive targets to terrorists. These systems and facilities are vulnerable and difficult to secure given their size, easy accessibility, large number of potential targets, and proximity to urban areas. A terrorist attack on these systems and facilities could cause a tremendous loss of life and disruption to our society. An attack would also be costly. According to testimony by a Port of Los Angeles official, a 2002 labor dispute led to a 10-day shutdown of West Coast port operations, costing the nation's economy an estimated $1.5 billion per day. A terrorist attack on a port facility could have a similar or greater impact. One potential security threat stems from those individuals who work in secure areas of the nation's transportation system, including seaports, airports, railroad terminals, mass transit stations, and other transportation facilities. It is estimated that about 6 million workers, including longshoremen, mechanics, aviation and railroad employees, truck drivers, and others, access secure areas of the nation's estimated 4,000 transportation facilities each day while performing their jobs. Some of these workers, such as truck drivers, regularly access secure areas at multiple transportation facilities. Ensuring that only workers who do not pose a terrorism security risk are allowed unescorted access to secure areas is important in helping to prevent an attack. According to TSA and transportation industry stakeholders, many individuals who work in secure areas are currently not required to undergo a background check or a stringent identification process in order to access secure areas. In addition, without a standard credential that is recognized across modes of transportation and facilities, many workers must obtain multiple credentials to access each transportation facility they enter. In the aftermath of the September 11, 2001, terrorist attacks, the Aviation and Transportation Security Act (ATSA) was enacted in November 2001. Among other things, ATSA required TSA to work with airport operators to strengthen access control points in secure areas and consider using biometric access control systems to verify the identity of individuals who seek to enter a secure airport area. In response to ATSA, TSA established the TWIC program in December 2001 to mitigate the threat of terrorists and other unauthorized persons accessing secure areas of the entire transportation network by creating a common identification credential that could be used by workers in all modes of transportation. In November 2002, the Maritime Transportation Security Act of 2002 (MTSA) was enacted and required the Secretary of Homeland Security to issue a maritime worker identification card that uses biometrics, such as fingerprints, to control access to secure areas of seaports and vessels, among other things. 
The responsibility for securing the nation’s transportation system and facilities is shared by federal, state, and local governments, as well as the private sector. At the federal government level, TSA, the agency responsible for the security of all modes of transportation, has taken the lead in developing the TWIC program, while the Coast Guard is responsible for developing maritime security regulations and ensuring that maritime facilities and vessels are in compliance with these regulations. As a result, TSA and the Coast Guard are working together to implement TWIC in the maritime sector. Most seaports, airports, mass transit stations, and other transportation systems and facilities in the United States are owned and operated by state and local government authorities and private companies. As a result, certain components of the TWIC program, such as installing card readers, will be the responsibility of these state and local governments and private industry stakeholders. TSA—through a private contractor—tested the TWIC program from August 2004 to June 2005 at 28 transportation facilities around the nation, including 22 port facilities, 2 airports, 1 rail facility, 1 maritime exchange, 1 truck stop, and a U.S. postal service facility. In August 2005, TSA and the testing contractor completed a report summarizing the results of the TWIC testing. TSA also hired an independent contractor to assess the performance of the TWIC testing contractor. Specifically, the independent contractor conducted its assessment from March 2005 to January 2006, and evaluated whether the testing contractor met the requirements of the testing contract. The independent contractor issued its final report on January 25, 2006. Since its creation, the TWIC program has received about $79 million in funding for program development. (See table 1.) The TWIC program is designed to enhance security using several key components (see fig. 1). These include Enrollment: Transportation workers will be enrolled in the TWIC program at enrollment centers by providing personal information, such as a social security number and address, and be photographed and fingerprinted. For those workers who are unable to provide quality fingerprints, TSA is to collect an alternate authentication identifier. Background checks: TSA will conduct background checks on each worker to ensure that individuals do not pose a security threat. These will include several components. First, TSA will conduct a security threat assessment that may include, for example, terrorism databases or terrorism watch lists, such as TSA’s No-fly and selectee lists. Second, a Federal Bureau of Investigation criminal history records check will be conducted to identify if the worker has any disqualifying criminal offenses. Third, workers’ immigration status and mental capacity will be checked. Workers will have the opportunity to appeal the results of the threat assessment or request a waiver in certain limited circumstances. TWIC card production: After TSA determines that a worker has passed the background check, the worker’s information is provided to a federal card production facility where the TWIC card will be personalized for the worker, manufactured, and then sent back to the enrollment center. Card issuance: Transportation workers will be informed when their cards are ready to be picked up at enrollment centers. 
Once a card has been issued, workers will present their TWIC cards to security officials when they seek to enter a secure area and in the future will enter secure areas through biometric card readers. Since we issued our report on the TWIC program in September 2006, TSA has made progress toward implementing the TWIC program and addressing several of the problems that we previously identified regarding contract oversight and planning and coordination with stakeholders. In January 2007, TSA and the Coast Guard issued a TWIC rule that sets forth the requirements for enrolling workers and issuing TWIC cards to workers in the maritime sector and awarded a $70 million contract for enrolling workers in the TWIC program. TSA is also taking steps designed to address requirements in the SAFE Port Act regarding the TWIC program, such as establishing a rollout schedule for enrolling workers and issuing TWIC cards at ports and conducting a pilot program to test TWIC access control technologies. TSA has also taken steps to strengthen TWIC contract planning and oversight and improve communication and coordination with its maritime stakeholders. Since September 2006, TSA reported that it has added staff with program and contract management expertise to help oversee the TWIC enrollment contract and taken additional steps to help ensure that contract requirements are met. In addition, TSA has also focused on improving communication and coordination with maritime stakeholders, such as developing plans for conducting public outreach and education efforts. On January 25, 2007, TSA and the Coast Guard issued a rule that sets forth the regulatory requirements for enrolling workers and issuing TWIC cards to workers in the maritime sector. Specifically, the TWIC rule provides that workers and merchant mariners requiring unescorted access to secure areas of maritime facilities and vessels must enroll in the TWIC program, undergo a background check, and obtain a TWIC card before such access is granted. In addition, the rule requires owners and operators of maritime facilities and vessels to change their existing access control procedures to ensure that merchant mariners and any other individual seeking unescorted access to a secure area of a facility or vessel has a TWIC. Table 2 describes the specific requirements in the TWIC rule. The TWIC rule does not include the requirements for owners and operators of maritime facilities and vessels to purchase and install TWIC access control technologies, such as biometric TWIC card readers. As a result, the TWIC card will initially serve as a visual identity badge until access control technologies are required to verify the credentials when a worker enters a secure area. According to TSA, during the program’s initial implementation, workers will present their TWIC cards to authorized security personnel, who will compare the cardholder to his or her photo and inspect the card for signs of tampering. In addition, the Coast Guard will verify TWIC cards when conducting vessel and facility inspections and during spot checks using hand-held biometric card readers to ensure that credentials are valid. According to TSA, the requirements for TWIC access control technologies will be set forth in a second proposed rule to be issued in 2008, at which time TSA will solicit public comments and hold public meetings. 
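During initial implementation the TWIC card is checked visually, and once readers are required the check is to be automated: the reader matches the cardholder's fingerprint against the biometric stored on the card and confirms that the card is still valid. The sketch below is a simplified, hypothetical illustration of that kind of reader-side decision; the data structures, function names, and matching logic are assumptions made for this example and do not describe TSA's actual card format or systems.

# Hypothetical illustration of a reader-side TWIC check; names and logic are
# assumptions for this sketch, not a description of TSA's actual systems.
from dataclasses import dataclass

@dataclass
class TwicCard:
    card_id: str
    fingerprint_template: bytes   # biometric reference stored on the card
    expiration_year: int

def templates_match(live_scan: bytes, reference: bytes) -> bool:
    # Real readers use fingerprint matching algorithms with score thresholds;
    # exact byte comparison here is a placeholder for that matching step.
    return live_scan == reference

def allow_unescorted_access(card: TwicCard,
                            live_scan: bytes,
                            canceled_card_ids: set[str],
                            current_year: int) -> bool:
    """Grant access only if the card is unexpired, not revoked, and the
    live fingerprint matches the template stored on the card."""
    if card.expiration_year < current_year:
        return False                      # expired credential
    if card.card_id in canceled_card_ids:
        return False                      # card revoked since issuance
    return templates_match(live_scan, card.fingerprint_template)

# Example: a valid card with a matching fingerprint is admitted; a revoked one is not.
card = TwicCard("TWIC-0001", b"template-bytes", expiration_year=2012)
print(allow_unescorted_access(card, b"template-bytes", set(), 2007))           # True
print(allow_unescorted_access(card, b"template-bytes", {"TWIC-0001"}, 2007))   # False

In practice, the validity check would draw on updated revocation information from TSA's national TWIC database, which is one reason stakeholders raised questions about connectivity for readers at facilities and on vessels.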
As part of the TWIC rule, TSA is also taking steps designed to address various requirements of the SAFE Port Act, including the requirement that it implement TWIC at the 10 highest-risk ports by July 1, 2007. According to TSA, the agency has categorized ports based on risk and has developed a schedule for implementing TWIC at these ports to address the deadlines in the SAFE Port Act. In addition, TSA is currently planning to conduct a pilot program at five maritime locations to test TWIC access control technologies, such as biometric card readers, in the maritime environment. According to TSA, the agency is partnering with the ports of Los Angeles and Long Beach to test TWIC access control technologies and plans to select additional ports to participate in the pilot in the near future. TSA and Port of Los Angeles officials told us that ports participating in the pilot will be responsible for paying the costs of the pilot and plan to use federal port security grant funds for this purpose. According to TSA, the agency plans to begin the pilot in conjunction with the issuance of TWIC cards so the access control technologies can be tested with the cards that are issued to workers. Once the pilot has been completed, TSA plans to use the results in developing its proposed rule on TWIC access control technologies. Following the issuance of the TWIC rule in January 2007, TSA awarded a $70 million contract to a private company to enroll the estimated 770,000 workers required to obtain a TWIC card. According to TSA officials, the contract costs include $14 million for the operations and maintenance of the TWIC identity management system that contains information on workers enrolled in the TWIC program, $53 million for the cost of enrolling workers, and $3 million designated to award the enrollment contractor in the event of excellent performance. TSA officials stated that they are currently transitioning the TWIC systems to the enrollment contractor and testing these systems to ensure that they will function effectively during nationwide implementation. TSA originally planned to begin enrolling workers at the first port by March 26, 2007—the effective date of the TWIC rule. However, according to TSA officials, initial enrollments have been delayed. While TSA officials did not provide specific reasons for the delay, officials from the port where enrollments were to begin told us that software problems were the cause of the delay and could postpone the first enrollments until May 2007. In addition, TSA and the Coast Guard have not set a date by which workers will be required to possess a TWIC card to access secure areas of maritime facilities and vessels. According to the TWIC rule, once the agency determines at which ports TWIC will be implemented and by what date, this schedule will be posted to the Federal Register. Since we issued our September 2006 report, TSA has taken several steps designed to strengthen contract planning and oversight. We previously reported that TSA experienced problems in planning for and overseeing the contract to test the TWIC program, which contributed to a doubling of TWIC testing contract costs and a failure to test all key components of the TWIC program. We recommended that TSA strengthen contract planning and oversight before awarding a contract to implement the TWIC program. TSA acknowledged these problems and has taken steps to address our recommendations. Specifically, TSA has taken the following steps designed to strengthen contract planning and oversight. 
Added staff with expertise in technology, acquisitions, and contract and program management to the TWIC program office. Established a TWIC program control office to help oversee contract deliverables and performance. Established monthly performance management reviews and periodic site visits to TWIC enrollment centers to verify performance data reported by the contractor. Required the enrollment contractor to survey customer satisfaction as part of contract performance. In addition to these steps, TSA has established a TWIC quality assurance surveillance plan that is designed to allow TSA to track the enrollment contractor's performance in comparison to acceptable quality levels. This plan is designed to provide financial incentives for exceeding these quality levels and disincentives, or penalties, if they are not met. According to the plan, the contractor's performance will be measured against established milestones and performance metrics that the contractor must meet for customer satisfaction, enrollment time, number of failures to enroll, and TWIC help desk response times, among others. TSA plans to monitor the contractor's performance through monthly performance reviews and by verifying information on performance metrics provided by the contractor. In addition to contract planning and oversight, TSA has also taken steps designed to address problems that were identified in our September 2006 report regarding communication and coordination with maritime stakeholders. We previously reported that stakeholders at all 15 TWIC testing locations that we visited cited poor communication and coordination by TSA during testing of the TWIC program. For example, TSA never provided the final results or report on TWIC testing to stakeholders that participated in the test, and some stakeholders stated that communication from TSA would stop for months at a time during testing. We recommended that TSA closely coordinate with maritime industry stakeholders and establish a communication and coordination plan to capture and address the concerns of stakeholders during implementation. TSA acknowledged that the agency could have better communicated with stakeholders at TWIC testing locations and has reported taking several steps to strengthen communication and coordination since September 2006. For example, TSA officials told us that the agency developed a TWIC communication strategy and plan that describes how the agency will communicate with the owners and operators of maritime facilities and vessels, TWIC applicants, unions, industry associations, Coast Guard Captains of the Port, and other interested parties. In addition, TSA required that the enrollment contractor establish a plan for communicating with stakeholders. TSA, the Coast Guard, and the enrollment contractor have taken additional steps designed to ensure close coordination and communication with the maritime industry. These steps include: Posting frequently asked questions on the TSA and Coast Guard websites. Participating in maritime stakeholder conferences and briefings. Working with Coast Guard Captains of the Ports and the National Maritime Security Advisory Committee to communicate with local stakeholders. Conducting outreach with maritime facility operators and port authorities, including informational bulletins and fliers. Creating a TWIC stakeholder communication committee chaired by TSA, the Coast Guard, and the enrollment contractor, with members from 15 maritime industry stakeholder groups. 
According to TSA, this committee will meet twice per month during the TWIC implementation. Several stakeholders we recently spoke to confirmed that TSA and its enrollment contractor have placed a greater emphasis on communicating and coordinating with stakeholders during implementation and on correcting past problems. For example, an official from the port where TWIC will first be implemented stated that, thus far, communication, coordination, and outreach by TSA and its enrollment contractor have been excellent, and far better than during TWIC testing. In addition, the TWIC enrollment contractor has hired a separate subcontractor to conduct a public outreach campaign to inform and educate the maritime industry and individuals who will be required to obtain a TWIC card about the program. For example, the port official stated that the subcontractor is developing a list of trucking companies that deliver to the port, so information on the TWIC enrollment requirements can be mailed to truck drivers. TSA and maritime industry stakeholders need to address several challenges to ensure that the TWIC program can be implemented successfully. As we reported in September 2006, TSA and its enrollment contractor face the challenge of transitioning from limited testing of the TWIC program to successful implementation of the program on a much larger scale covering 770,000 workers at about 3,500 maritime facilities and 5,300 vessels. Maritime stakeholders we spoke to identified additional challenges to implementing the TWIC program that warrant attention by TSA and its enrollment contractor, including educating workers on the new TWIC requirements, ensuring that enrollments begin in a timely manner, and processing numerous background checks, appeals, and waiver applications. Furthermore, TSA and industry stakeholders also face difficult challenges in ensuring that TWIC access control technologies will work effectively in the maritime environment, be compatible with TWIC cards that will be issued soon, and balance security with the flow of maritime commerce. In September 2006, we reported that TSA faced the challenge of enrolling and issuing TWIC cards to a significantly larger population of workers in a timely manner than was done during testing of the TWIC program. In testing the TWIC program, TSA enrolled and issued TWIC cards to only about 1,700 workers at 19 facilities, well short of its goal of 75,000. According to TSA and the testing contractor, the lack of volunteers to enroll in the TWIC program testing and technical difficulties in enrolling workers, such as difficulty in obtaining workers' fingerprints to conduct background checks, led to fewer enrollments than expected. TSA reports that it used the testing experience to make improvements to the enrollment and card issuance process and has taken steps to address the challenges that we previously identified. For example, TSA officials stated that the agency will use a faster and easier method of collecting fingerprints than was used during testing and will enroll workers individually during implementation, as opposed to enrolling them in large groups, as was done during testing. In addition, the TWIC enrollment contract Statement of Work requires the contractor to develop an enrollment test and evaluation program to ensure that enrollment systems function as required under the contract. Such a testing program will be valuable to ensure that these systems work effectively prior to full-scale implementation. 
We also reported that TSA faced the challenge of ensuring that workers are not providing false information and counterfeit identification documents when they enroll in the TWIC program. According to TSA, the TWIC enrollment process to be used during implementation will use document scanning and verification software to help determine if identification documents are fraudulent, and personnel responsible for enrolling workers will be trained to identify fraudulent documents. Since we issued our report in September 2006, we have also identified additional challenges to implementing the TWIC program that warrant attention by TSA and its enrollment contractor. We recently spoke with some maritime stakeholders that participated in TWIC testing and that will be involved in the initial implementation of the program to discuss their views on the challenges of enrolling and issuing TWIC cards to workers. These stakeholders expressed concerns regarding the following issues: Educating workers: TSA and its enrollment contractor face a challenge in identifying all workers who are required to obtain a TWIC card, educating them about how to enroll and receive a TWIC card, and ensuring that they enroll and receive a TWIC card by the deadlines to be established by TSA and the Coast Guard. For example, while longshoremen who work at a port every day may be aware of the new TWIC requirements, truck drivers who deliver to the port may be located in different states or countries and may not be aware of the requirements. Timely enrollments: One stakeholder expressed concern about the challenges the enrollment contractor faces in enrolling workers at his port. For example, at this port, the enrollment contractor has not yet begun to lease space to install enrollment centers—which at this port could be a difficult and time-consuming task due to the shortage of space. Stakeholders we spoke to also suggested that until TSA establishes a deadline for when TWIC cards will be required at ports, workers will likely procrastinate in enrolling, which could make it difficult for the contractor to enroll large populations of workers in a timely manner. Background checks: Some maritime organizations are concerned that many of their workers will be disqualified from receiving a TWIC card by the background check. These stakeholders emphasized the importance of TSA establishing a process to ensure timely appeals and waivers for the potentially large population of workers who do not pass the check. According to TSA, the agency already has established processes for conducting background checks, appeals, and waivers for other background checks of transportation workers. In addition, TSA officials stated that the agency has established agreements with the Coast Guard to use their administrative law judges for appeal and waiver cases and plans to use these processes for the TWIC background check. In our September 2006 report, we noted that TSA and maritime industry stakeholders faced significant challenges in ensuring that TWIC access control technologies, such as biometric card readers, worked effectively in the maritime sector. Few facilities that participated in TWIC testing used biometric card readers that will be required to read the TWIC cards in the future. As a result, TSA obtained limited information on the operational effectiveness of biometric card readers, particularly when individuals use these readers outdoors in the harsh maritime environment, where they can be affected by dirt, salt, wind, and rain. 
In addition, TSA did not test the use of biometric card readers on vessels, although they will be required on vessels in the future. Also, industry stakeholders we spoke to were concerned about the costs of implementing and operating TWIC access control systems, linking card readers to their local access control systems, and connecting to TSA's national TWIC database to obtain updated security information on workers, as well as about how biometric card readers would be implemented and used on vessels and how these vessels would communicate with TSA's national TWIC database remotely. Because of comments regarding TWIC access control technology challenges that TSA received from maritime industry stakeholders on the TWIC proposed rule, TSA decided to exclude all access control requirements from the TWIC rule issued in January 2007. Instead, TSA plans to issue a second proposed rule pertaining to access control requirements in 2008, which will allow more time for maritime stakeholders to comment on the technology requirements and for TSA to address the challenges that we and stakeholders identified. Our September 2006 report also highlighted the challenges that TSA and industry stakeholders face in balancing the security benefits of the TWIC program with the impact the program could have on maritime commerce. If implemented effectively, the security benefits of the TWIC program in preventing a terrorist attack could save lives and avoid a costly disruption in maritime commerce. Alternatively, if key components of the TWIC program, such as biometric card readers, do not work effectively, they could slow the daily flow of maritime commerce. For example, if workers or truck drivers have problems with their fingerprint verifications on biometric card readers, they could create long queues delaying other workers or trucks waiting in line to enter secure areas. Such delays could be very costly in terms of time and money to maritime facilities. Some stakeholders we spoke to also expressed concern with applying TWIC access control requirements to small facilities and vessels. For example, smaller vessels could have crews of fewer than 10 persons, and checking TWIC cards each time a person enters a secure area may not be necessary. TSA acknowledged the potential impact that the TWIC program could have on the flow of maritime commerce and plans to obtain additional public comments on this issue from industry stakeholders and develop solutions to these challenges in the second rulemaking on access control technologies. In our September 2006 report, we recommended that TSA conduct additional testing to ensure that TWIC access control technologies work effectively and that the TWIC program balances the added security of the program with the impact that it could have on the flow of maritime commerce. As required by the SAFE Port Act, TSA plans to conduct a pilot program to test TWIC access control technologies in the maritime environment. According to TSA, the pilot will test the performance of biometric card readers at various maritime facilities and on vessels as well as the impact that these access control systems have on facilities' and vessels' business operations. TSA plans to use the results of this pilot to develop the requirements and procedures for implementing and using TWIC access control technologies in the second rulemaking. Preventing unauthorized persons from entering secure areas of the nation's ports and other transportation facilities is critical to preventing a terrorist attack. 
The TWIC program was initiated in December 2001 to mitigate the threat of terrorists accessing secure areas. Since our September 2006 report, TSA has made progress toward implementing the program, including issuing a TWIC rule, taking steps to implement requirements of the SAFE Port Act, and awarding a contract to enroll workers in the program. While TSA plans to begin enrolling workers and issuing TWIC cards in the next few months, it is important that the agency establish clear and reasonable timeframes for implementing TWIC. TSA officials told us that the agency has taken steps to improve contract oversight and communication and coordination with its maritime TWIC stakeholders since September 2006. While the steps that TSA reports taking should help to address the contract planning and oversight problems that we have previously identified and recommendations we have made, the effectiveness of these steps will not be clear until implementation of the TWIC program begins. In addition, significant challenges remain in enrolling about 770,000 persons at about 3,500 facilities in the TWIC program. As a result, it is important that TSA and the enrollment contractor make communication and coordination a priority to ensure that all individuals and organizations affected by the TWIC program are aware of their responsibilities. Further, TSA and industry stakeholders need to address challenges regarding enrollment and TWIC access control technologies to ensure that the program is implemented effectively. It is important that TSA and the enrollment contractor develop a strategy to ensure that any potential problems that these challenges could cause are addressed during TWIC enrollment and card issuance. Finally, it will be critical that TSA ensure that the TWIC access control technology pilot program fully test all aspects of the TWIC program on a full scale in the maritime environment and the results be used to ensure a successful implementation of these technologies in the future. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information on this testimony, please contact Norman J. Rabkin at (202) 512- 8777 or at rabkinn@gao.gov. Individuals making key contributions to this testimony include John Hansen, Chris Currie, Nicholas Larson, and Geoff Hamilton. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Transportation Security Administration (TSA) is developing the Transportation Worker Identification Credential (TWIC) to ensure that only workers who do not pose a terrorist threat are allowed to enter secure areas of the nation's transportation facilities. This testimony is based primarily on GAO's December 2004 and September 2006 reports on the TWIC program and interviews with TSA and port officials conducted in March and April 2007 to obtain updates on the TWIC program. Specifically, this testimony addresses (1) the progress TSA has made since September 2006 in implementing the TWIC program and (2) some of the remaining challenges that TSA and the maritime industry must overcome to ensure the successful implementation of the TWIC program. Since we issued our report on the TWIC program in September 2006, TSA has made progress toward implementing the TWIC program and addressing several of the problems that we previously identified regarding contract oversight and planning and coordination with stakeholders. Specifically, TSA has issued a TWIC rule that sets forth the requirements for enrolling workers and issuing TWIC cards to workers in the maritime sector; awarded a $70 million contract for enrolling workers in the TWIC program; developed a schedule for enrolling workers and issuing TWIC cards at ports and conducting a pilot program to test TWIC access control technologies; added staff with program and contract management expertise to help oversee the TWIC enrollment contract; and developed plans to improve communication and coordination with maritime stakeholders, including plans for conducting public outreach and education efforts. TSA and maritime industry stakeholders still face several challenges to ensuring that the TWIC program can be implemented successfully: (1) TSA and its enrollment contractor need to transition from limited testing of the TWIC program to successful implementation of the program on a much larger scale covering 770,000 workers at about 3,500 maritime facilities and 5,300 vessels. (2) TSA and its enrollment contractor will need to educate workers on the new TWIC requirements, ensure that enrollments begin in a timely manner, and process numerous background checks, appeals, and waivers. (3) TSA and industry stakeholders will need to ensure that TWIC access control technologies will work effectively in the maritime environment, be compatible with TWIC cards that will be issued, and balance security with the flow of maritime commerce. As TSA works to implement the TWIC program and begin enrolling workers, it will be important that the agency establish clear and reasonable time frames and ensure that all aspects of the TWIC program, including the TWIC access control technologies, are fully tested in the maritime environment.
Part of the Mariana Islands Archipelago, the CNMI is a chain of 14 islands in the western Pacific Ocean—just north of Guam and about 3,200 miles west of Hawaii. The CNMI has a total population of 53,890, according to preliminary results of the CNMI's 2016 Household, Income, and Expenditures Survey. Almost 90 percent of the population (48,200) resided on the island of Saipan, with an additional 6 percent (3,056) on the island of Tinian and 5 percent (2,635) on the island of Rota. Under the Covenant that established the CNMI's relationship with the United States, the federal government is responsible for matters relating to foreign affairs and defense affecting the CNMI. The Covenant initially made many federal laws applicable to the CNMI, including laws that provide federal services and financial assistance programs. However, the Covenant preserved the CNMI's exemption from certain federal laws that had previously been inapplicable to the Trust Territory of the Pacific Islands, including certain federal minimum wage provisions and immigration laws, with certain limited exceptions. Under the terms of the Covenant, the federal government has the right to apply federal law in these exempted areas without the consent of the CNMI government. Section 902 of the Covenant provides that the U.S. and CNMI governments will designate special representatives to meet and consider in good faith issues that affect their relationship and to make a report and recommendations. The Department of Homeland Security (DHS), through U.S. Citizenship and Immigration Services (USCIS), grants immigration benefits, that is, the ability to live, and in some cases work, in the CNMI permanently or temporarily. The Department of the Interior's (DOI) Office of Insular Affairs coordinates federal policies and provides technical and financial assistance to the CNMI. The Covenant requires DOI to consult regularly with the CNMI on all matters affecting the relationship between the U.S. government and the islands. In May 2016, President Obama designated the Assistant Secretary for Insular Affairs as the Special Representative for the United States for the 902 Consultations, a process initiated at the request of the Governor of the CNMI to discuss and make recommendations to Congress on immigration and labor matters affecting the growth potential of the CNMI economy, among other topics. The 902 Consultations resulted in a report to the President in January 2017, which we refer to as the 902 Report. The Department of Labor (DOL) requires employers to fully test the labor market for U.S. workers to ensure that U.S. workers are not adversely affected by the hiring of nonimmigrant and immigrant workers, except where such testing is not required by law. DOL also provides grants to the CNMI government supporting youth, adult, and dislocated worker programs. From 1999 through 2015, DOL provided such grants under the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act of 2014 (WIOA). The size of the CNMI's workforce has changed considerably over time. From the lowest point in 2013, the number of employed workers increased by approximately 8 percent by 2015 (from 23,344 to 25,307). However, the number employed in 2015 (25,307) was still approximately 31 percent less than the number employed in 2007 (36,524). The number of foreign workers fell from a peak of almost 38,000 in 2002 (roughly 75 percent of the employed workers) to under 13,000 in 2015. In contrast, since 2002, the number of domestic workers has fluctuated year to year, ranging from about 10,500 to about 13,500, but increased by 17 percent from 2013 to 2015. 
In 2007, the minimum wage provisions of the Fair Labor Standards Act of 1938 were applied to the CNMI, requiring the minimum wage in the CNMI to rise incrementally to the federal level in a series of scheduled increases. Under current law, the next minimum wage increase will occur on September 30, 2017, and the CNMI will reach the current U.S. minimum wage on September 30, 2018 (see table 1). Based on our preliminary analysis, we estimate that approximately 62 percent (15,818 of 25,657) of the CNMI's wage workers in 2014, assuming they maintained employment, would have been directly affected by the federally mandated 2016 wage increase, which raised the CNMI's minimum wage from $6.05 to $6.55 per hour. Since 72 percent of the total foreign workers made less than or equal to $6.55 per hour in 2014, they were more likely to have been directly affected by the 2016 wage increase than domestic workers, of whom only 41 percent made less than or equal to $6.55. The Consolidated Natural Resources Act of 2008 amended the U.S.–CNMI Covenant to apply federal immigration law to the CNMI, following a transition period. Among other things, the act includes several provisions affecting foreign workers during the transition period. Under the act, eligible foreign workers may be granted CNMI-Only transitional worker (CW-1) status that allows them to work in the CNMI. Dependents of CW-1 nonimmigrants (spouses and minor children) are eligible for dependent of a CNMI-Only transitional worker (CW-2) status, which derives from and depends on the CW-1 worker's status. In accordance with the Consolidated Natural Resources Act, DHS, through USCIS, has annually reduced the number of CW-1 permits and is required to do so until the number reaches zero by the end of a transition period. Since 2011, DHS has annually determined the numerical limitation, terms, and conditions of the CW-1 permits (see table 2). The act was amended in December 2014 to extend the transition period until December 31, 2019, and eliminate the Secretary of Labor's authority to provide for future extensions of the CW program. associated states, and could allow them to live and work either in the United States and its territories or in the CNMI only. Since 1990, the CNMI's tourism market has experienced considerable fluctuation, as shown by the total annual number of visitor arrivals (see fig. 2). Total visitor arrivals to the CNMI dropped from a peak of 726,690 in fiscal year 1997 to a low of 338,106 in 2011, a 53 percent decline. Since 2011, however, visitor arrivals have increased by 48 percent, reaching 501,489 in fiscal year 2016. Although Korean visitors enter the CNMI under the U.S. visa waiver program, Chinese visitors are not eligible and are permitted to be temporarily present in the CNMI under DHS's discretionary parole authority, according to DHS officials. DHS exercises parole authority to allow, on a case-by-case basis, eligible nationals of China to enter the CNMI temporarily as tourists when there is significant public benefit, according to DHS data. From fiscal year 2011 to 2016, the percentage of travelers that arrived at the Saipan airport and were granted discretionary parole increased from about 20 percent to about 50 percent of the total travelers, according to our analysis of CBP data. 
If all CW-1 workers, or 45 percent of the total workers in 2015, were removed from the CNMI's labor market, our preliminary economic analysis projects a 26 to 62 percent reduction in the CNMI's 2015 GDP, depending on the assumptions made. To estimate the possible effect of a reduction in the number of workers with CW-1 permits in the CNMI to zero—through the scheduled end of the CW program in 2019—we employed an economic method that enabled us to simulate the effect of a reduction under a number of different assumptions. Specifically, our simulation indicated a 50 percent likelihood that the CNMI's 2015 GDP would have ranged from $462 million to $583 million, which is 37 to 50 percent lower than the actual value, and a 25 percent likelihood that it would have ranged from $353 million to $462 million, which is 50 to 62 percent lower than the actual value (see fig. 3). Across the full range of probable outcomes, the elimination of the CW program would result in a 26 to 62 percent decline in the CNMI's 2015 GDP, a relatively large negative effect on the economy. The CNMI economy is currently experiencing growing demand for workers, particularly among occupations in construction and hospitality. Since fiscal year 2013, demand for CW-1 permits has doubled, and in fiscal year 2016, demand exceeded the numerical limit (or cap) on approved CW-1 permits set by DHS. Approved CW-1 permits grew from 6,325 in fiscal year 2013 to 13,299 in fiscal year 2016. In 2016, when the cap was set at 12,999, DHS received enough petitions by May 6, 2016, to approve 13,299 CW-1 permits, reaching the cap 5 months prior to the end of the fiscal year. On October 14, 2016, 2 weeks into fiscal year 2017, DHS announced that it had received enough petitions to reach the CW-1 cap and would not accept requests for new fiscal year 2017 permits during the remaining 11 months. In interviews, some employers reported being surprised to learn that the cap had been reached when they sought renewals for existing CW-1 workers. See table 3 for the numerical limit of CW-1 permits and number of permits approved by fiscal year. Based on DHS data on approved CW-1 permits, by country of birth, occupation, and business, from fiscal years 2014 through 2016, the number of permits approved for Chinese nationals increased, the number of permits approved for construction workers increased, and a large number of CW-1 permits were approved for three new businesses. Chinese nationals. In 2016, DHS approved 4,844 CW-1 permits for Chinese workers, increasing from 1,230 in 2015 and 854 in 2014. This represents a change in the source countries of CW-1 workers, with the percentage of workers from the Philippines declining from 65 to 53 percent during this period, while the share from China rose from 9 to 36 percent (see table 4). Construction workers. In 2016, DHS approved 3,443 CW-1 permits for construction workers, increasing from 1,105 in 2015 and 194 in 2014 (see table 5). New businesses. 
In 2016, DHS approved 3,426 CW-1 permits for three construction businesses, representing 26 percent of all approved permits. Two of these businesses had not previously applied for CW-1 permits. The third business was new in 2015 and was granted only 62 CW-1 permits that year. The CNMI government awarded a license for a casino in Saipan in August 2014 that requires the licensee to complete an initial gaming facility no later than 36 months from the date of the license, or by August 2017. See figure 4 for photos showing the initial gaming facility's development site in Saipan both before and during construction. CNMI employers may also petition for temporary foreign workers under the H-2 visa programs, and H-2B visas are subject to an annual numerical restriction. However, China is not listed as an eligible country for H-2 visas. Amid the uncertainty of the future availability of foreign labor, the CNMI government has granted zoning permits to planned projects that will require thousands of additional workers. Twenty-two new development projects, including six new hotels or casinos in Saipan and two new hotels or casinos in Tinian, are planned for construction or renovation by 2019. Beyond the construction demand created by these projects, the CNMI's Bureau of Environmental and Coastal Quality estimates that at least 8,124 employees will be needed to operate the new hotels and casinos. According to data provided by the bureau, most of this planned labor demand is for development on the island of Tinian, where two businesses plan to build casino resorts, with an estimated labor demand of 6,359 workers for operations—more than twice the island's population in 2016. According to the Department of the Treasury, the existing casino and hotel on Tinian closed in 2015 after having been fined $75 million by the U.S. Department of the Treasury for violations of the Bank Secrecy Act of 1970. One of the two Tinian developments offers overseas immigration services, including assistance with obtaining employment- or investment-based immigration to the United States. We observed a billboard advertisement in Tinian with Chinese writing indicating that by investing in a new development in Tinian, an investor's family members would all get American green cards. This resort development, whose plans estimate a labor force of 859, has undertaken site preparation, while the other larger resort project, whose plans estimate a labor force of 5,500, had not initiated construction as of December 2016. Currently, the CNMI government does not have a planning agency or process to ensure that planned projects are aligned with the CNMI's available labor force, according to CNMI officials. In January 2017, a bill was introduced in the CNMI Senate to establish an Office of Planning and Development within the Office of the Governor. Our preliminary analysis shows that the current number of unemployed domestic workers in the CNMI is insufficient to replace the existing CW-1 workers or to fill all the nonconstruction jobs that planned development projects are expected to create once their business operations commence. Preliminary results of our analysis show that the unemployed domestic workforce, estimated at 2,386 in 2016, will be well below the number of workers needed to replace currently employed CW-1 workers in nonconstruction-related occupations. In addition, our preliminary analysis indicates that the unemployed workforce would fall far short of the demand for additional workers in nonconstruction-related occupations needed to support the ongoing operations of planned development projects—currently estimated at 8,124 workers by 2019. 
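The figures cited in this statement allow a rough, back-of-envelope illustration of that gap. The sketch below uses only numbers reported here (the estimated unemployed domestic workforce in 2016, approved CW-1 permits and construction-related permits for fiscal year 2016, and the estimated operations workers needed by 2019) and assumes, for simplicity, that one approved permit corresponds to one worker and that every unemployed domestic worker is available and qualified for the open positions; it is an illustration, not GAO's estimating method.

# Back-of-envelope illustration using figures cited in this statement;
# assumes one CW-1 permit equals one worker and that every unemployed
# domestic worker is available and qualified for the open jobs.
unemployed_domestic_2016 = 2_386       # estimated unemployed domestic workforce, 2016
cw1_permits_fy2016 = 13_299            # approved CW-1 permits, fiscal year 2016
cw1_construction_fy2016 = 3_443        # CW-1 permits approved for construction workers
new_operations_demand_2019 = 8_124     # workers needed to operate planned projects by 2019

# Nonconstruction CW-1 workers who would eventually need to be replaced.
cw1_nonconstruction = cw1_permits_fy2016 - cw1_construction_fy2016

projected_demand = cw1_nonconstruction + new_operations_demand_2019
shortfall = projected_demand - unemployed_domestic_2016

print(f"Nonconstruction CW-1 workers: {cw1_nonconstruction:,}")                   # 9,856
print(f"Projected nonconstruction demand: {projected_demand:,}")                  # 17,980
print(f"Shortfall after drawing on unemployed domestic workers: {shortfall:,}")   # 15,594

Even under these favorable assumptions, the available domestic workforce covers only a small fraction of the projected nonconstruction demand, which is consistent with the analysis summarized above.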
To meet this demand, CNMI employers may need to recruit workers who are eligible to work in the United States from the U.S. states, U.S. territories, and the freely associated states (the Federated States of Micronesia, the Republic of the Marshall Islands, and the Republic of Palau). For example, in 2003, 1,909 freely associated state workers were employed in the CNMI as compared with 677 of these workers in 2015, according to CNMI tax data. Moreover, many citizens from the freely associated states migrate to the United States each year, including to nearby Guam. Guam and Hawaii, the closest U.S. areas to the CNMI, both have higher local minimum wages than the CNMI, currently at $8.25 and $9.25 per hour, respectively, according to DOL. Employers in the CNMI are required to attempt to recruit and hire U.S. workers. The CNMI government has a goal that all employers hire at least 30 percent U.S. workers, and employers are generally required to post all job openings to the CNMI Department of Labor's website. However, the CNMI government can grant, and has granted, exemptions to this requirement. From May 8, 2015, to May 27, 2016, seven businesses were granted exemptions, according to data provided by the CNMI Department of Labor. In addition, all employers that apply for CW-1 permits must attest that no qualified U.S. worker is available for the job opening. However, during our ongoing work, some of the CNMI employers with whom we met reported that they face challenges in recruiting and retaining U.S. citizens, including unsatisfactory results from job postings, high recruiting costs, and difficulty retaining U.S. workers.

The federal and CNMI governments support programs seeking to address the CNMI's labor force challenges. These programs include job training funded by employers' CW-1 vocational education fees that DHS transfers to the CNMI government and employment and training assistance funded by DOL. Our preliminary analysis shows that in recent years, on average, DHS transferred about $1.8 million per year in CW-1 vocational education fees and DOL provided about $1.3 million per year to the CNMI for employment and training programs. DHS collects the $150 vocational education fee assessed for each foreign worker on a CW-1 petition and typically transfers the fees to the CNMI government each month. Results of our ongoing work indicate that to support vocational education curricula and program development in fiscal years 2012 through 2016, DHS transferred to the CNMI Treasury about $9.1 million in CW-1 fees. In fiscal years 2012 through 2016, the CNMI government allocated about $5.8 million of the $9.1 million in CW-1 vocational education fees to three educational institutions (see fig. 5). At present, the CW-1 fees support job training programs at Northern Marianas College and Northern Marianas Trades Institute and in recent years also funded job training provided by CNMI's Public School System. All three institutions reported using a majority of the CW-1 fees to pay the salaries and benefits of faculty and staff members involved in job training programs.

Northern Marianas Trades Institute. Established in 2008, the institute received $1.7 million in CW-1 funding. The institute specializes in training youths and adults in construction, hospitality, and culinary trades. The institute's senior officers told us that in fiscal year 2016, 300 students were enrolled in the institute's fall, spring, and summer sessions, and as of November 2016, 132 of these students had found employment after completing their training.

CNMI's Public School System.
In fiscal years 2012 through 2015, the Public School System—which consists of 20 public schools, including 5 high schools that graduated 662 students in the 2014–2015 school year—received $2 million in CW-1 funds for its cooperative education program designed to prepare high school students for the CNMI's job market. By the end of the 2014–2015 school year, 452 students were enrolled in the cooperative education program, according to the federal programs officer for the Public School System. As part of our ongoing work, we facilitated group discussions with current and former students of the CW-1-funded programs at each of the three institutions. Several participants told us that the training had helped them find jobs. Participants also identified specific benefits of the training they received, such as increased familiarity with occupations they intended to enter, learning communication skills tailored for specific work environments, and maintaining and improving skills in a chosen career path. However, the employers we interviewed in the CNMI told us that the benefits of the job training programs supported by the CW-1 vocational education fees were limited to Saipan and that programs run by Northern Marianas College and Northern Marianas Trades Institute were unavailable on Tinian and Rota.

Preliminary results of our ongoing work show that from July 2012 through June 2016, DOL provided about $5.3 million in grants under the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act of 2014 (WIOA) to the CNMI Department of Labor's Workforce Investment Agency for job search assistance, career counseling, and training. That agency carried out WIA programs in the CNMI and now administers programs under WIOA. DOL's Employment and Training Administration conducts federal oversight of these programs. Providers of DOL-funded worker training include Northern Marianas College, Northern Marianas Trades Institute, CNMI government agencies, and private businesses. Examples of training provided by these entities include courses toward certification as a phlebotomy technician, a nursing assistant, and a medical billing and coding specialist. The CNMI developed a state plan outlining a 4-year workforce development strategy under WIOA and submitted its first plan by April 1, 2016. The plan and the WIOA performance measures took effect in July 2016. According to its state plan, the CNMI Department of Labor has formed a task force to assess approaches for using workforce programs to prepare CNMI residents for jobs that will be available because of ongoing reductions in the number of foreign workers and the eventual expiration of the CW program.

In December 2016, after 8 months of official 902 Consultations, informal discussions, and site visits to locations in the CNMI, the Special Representatives of the United States and the CNMI transmitted a report to the President that included six recommendations agreed to by the Special Representatives on immigration and labor matters: 1. Extending the CW program beyond 2019 and other amendments, such as raising the CW-1 cap and restoring the executive branch's authority to extend the CW program. 2. Providing permanent status for long-term guest workers. 3. Soliciting input on suggested regulatory changes to the CW program. 4. Considering immigration policies to address regional labor shortages. 5.
Extending eligibility to the CNMI for additional federal workforce development programs. 6. Establishing a cooperative working relationship between DHS and the CNMI. Table 6 lists these six recommendations and summarizes the next steps that, according to the report, could be taken toward implementing them. Chairman Murkowski, Ranking Member Cantwell, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. For further information regarding this statement, please contact David Gootnick, Director, International Affairs and Trade, at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Emil Friberg (Assistant Director), Julia Ann Roberts (Analyst-in-Charge), Sada Aksartova, David Blanding, Benjamin Bolitzer, David Dayton, and Moon Parks. Technical support was provided by Neil Doherty, Mary Moutsos, and Alexander Welsh. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2008, Public Law 110-229 established federal control of CNMI immigration. It required DHS to create a transitional work permit program for foreign workers in the CNMI and to decrease the number of permits issued annually; it presently requires that DHS reduce them to zero by December 31, 2019. To implement this aspect of the law, in 2011, DHS created a CW-1 permit program for foreign workers. In 2015, foreign workers totaled 12,784, making up more than half of the CNMI workforce. GAO was asked to review the implementation of federal immigration laws in the CNMI. This testimony discusses GAO's preliminary observations from its ongoing work on (1) the potential economic impact of reducing the number of CNMI foreign workers to zero and (2) federal and CNMI efforts to address labor force challenges. GAO reviewed U.S. laws and regulations; analyzed government data, including CNMI tax records since 2001; and conducted fieldwork in Saipan, Tinian, and Rota, CNMI. During fieldwork, GAO conducted semistructured interviews and discussion groups with businesses, CW-1 workers, U.S. workers, and current and former job training participants. GAO also interviewed officials from the CNMI government, DHS, and the U.S. Departments of Commerce, the Interior, and Labor. If all foreign workers in the Commonwealth of the Northern Mariana Islands (CNMI) with CNMI-Only transitional worker (CW-1) permits, or 45 percent of total workers in 2015, were removed from the CNMI's labor market, GAO's preliminary economic analysis projects a 26 to 62 percent reduction in CNMI's 2015 gross domestic product (GDP)—the most recent GDP available. In addition, demand for foreign workers in the CNMI exceeded the available number of CW-1 permits in 2016—many approved for workers from China and workers in construction occupations. The construction of a new casino in Saipan is a key factor in this demand (see photos taken both before and during construction in 2016). Meanwhile, by 2019, plans for additional hotels, casinos, and other projects estimate needing thousands of new employees. When the CW-1 permit program ends in 2019, GAO's preliminary analysis of available data shows that the unemployed domestic workforce, estimated at 2,386 in 2016, will be well below the CNMI's expected demand for labor. To meet this demand, CNMI employers may need to recruit U.S.-eligible workers from the U.S. states, U.S. territories, and the freely associated states (the Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau). Federal and CNMI efforts to address labor force challenges include (1) job training programs and (2) employment assistance funded by the U.S. Department of Labor and implemented by the CNMI's Department of Labor. The Department of Homeland Security (DHS) collects the $150 vocational education fee assessed for each foreign worker on a CW-1 petition and transfers the fees to the CNMI government. Results of GAO's ongoing work indicate that to support vocational education curricula and program development in fiscal years 2012 through 2016, DHS transferred to the CNMI Treasury about $9.1 million in CW-1 fees. During this period, GAO's preliminary analysis shows that the CNMI government allocated about $5.8 million of the $9.1 million to three educational institutions: Northern Marianas College, Northern Marianas Trades Institute, and the CNMI's Public School System. 
In 2016, a U.S.–CNMI consultative process resulted in a report to Congress with six recommendations related to the CNMI economy, including one to raise the cap on CW-1 foreign worker permits and extend the permit program beyond 2019. GAO is not making any recommendations at this time. GAO plans to issue a final report in May 2017.
DOE’s contractors operate a number of facilities that are used to produce nuclear materials and design, test, assemble, and disassemble nuclear weapons. In the operation of these facilities, contractor employees may handle materials, documents, and information that are classified. An employee working in such an environment is investigated and granted a security clearance if one is warranted. To ensure that personnel with access to classified information do not compromise national defense and security, DOE’s operations offices may suspend security clearances. A clearance may be suspended as a result of an employee’s use of illegal drugs, alcohol abuse, mental illness, falsification of information on security statements, sabotage or treason, membership in an organization that advocates the overthrow of the government or association with people who are members of such organizations, failure to protect classified data, unusual conduct or dishonesty, and having relatives living in a country whose interests are hostile to those of the United States. Information leading to the suspension of an employee’s clearance can come from many sources, including routine security reinvestigations, random drug testing, and allegations from other people. If DOE believes that national security could potentially be compromised, it begins a multilayered review process that can result in the suspension—and ultimately revocation—of an employee’s security clearance. More than a year may pass before DOE makes a final determination. The employee is entitled to a formal hearing by a hearing officer and attorneys, a review of the hearing transcript by a personnel security review examiner, and a final resolution by the Security Affairs Director. DOE may also have an employee undergo a psychiatric evaluation to examine the employee’s judgment or reliability if information reveals mental illness, alcohol abuse, or drug use. The facilities operated by DOE’s Albuquerque, Savannah River, and Oak Ridge operations offices employ the Department’s largest numbers of employees holding clearances—more than 84,000. These three offices oversee six major contractors: AT&T/Sandia Corporation (Sandia National Laboratories) and the University of California (Los Alamos National Laboratory) at the Albuquerque Operations Office in New Mexico; Westinghouse and Bechtel companies at the Savannah River Operations Office in South Carolina; and Martin Marietta Energy Systems, Incorporated, and M. K. Ferguson of Oak Ridge Company at the Oak Ridge Operations Office in Tennessee. At the locations included in our review, in various 1-year periods during fiscal year 1989 through fiscal year 1993, contractor employees from several minority groups had their security clearances suspended more often than would be expected statistically when they were compared with the majority population of the workforce. The population of contractor employees includes Asians, American Indians, African-Americans, Hispanics, and whites. Table 1 shows the number of years during this period in which a statistical disparity occurred in the number of clearances suspended for the employee population groups at the three sites. During the period covered by our review, AT&T/Sandia Corporation operated the Sandia National Laboratories and the University of California operated the Los Alamos National Laboratory for DOE’s Albuquerque Operations Office. These two contractors combined employ more than 15,000 people with security clearances. 
DOE suspended the security clearances of 98 contractor employees at Sandia and Los Alamos during fiscal year 1989 through fiscal year 1993. The number of clearances suspended for Hispanics was statistically disparate in fiscal years 1992 and 1993; the number for American Indians was statistically disparate in fiscal year 1992. Two other racial/ethnic minority groups were represented at Sandia and Los Alamos: Asians and African-Americans. However, no Asians had their clearances suspended in this period, and the number of African-Americans whose clearances were suspended did not show a statistically significant disparity. More specifically, in fiscal year 1992 American Indians and Hispanics made up about 2 percent and about 23 percent, respectively, of the total population of employees at Sandia and Los Alamos. However, 12 percent (4 of 33) of the suspensions involved American Indians, and 42 percent (14 of 33) involved Hispanics. In fiscal year 1993, Hispanics made up about 23 percent of the total employee population at Sandia and Los Alamos but accounted for 47 percent (14 of 30) of the number of security clearances suspended. The disparities for these groups in these years were all significant, according to the Fisher’s Exact Test. (See app. II for data on contractor employees at the Sandia and Los Alamos national laboratories.) DOE’s Savannah River facility is operated by the Westinghouse Company for DOE’s Savannah River Operations Office. The major construction contractor is the Bechtel Company. About 20,000 employees of Westinghouse and Bechtel work at the Savannah River Site. About 17,000 of those employees have security clearances. DOE suspended the security clearances of 163 contractor employees at the Savannah River Site during calendar years 1989 through 1993. The number of clearances suspended was statistically disparate for one group, African-Americans, in 3 of the 5 years: 1991, 1992, and 1993. African-Americans made up about 20 percent of the total number of employees holding clearances throughout this period. In calendar year 1991, 40 percent (10 of 25) of those whose clearances were suspended were African-American. African-Americans accounted for about 48 percent (27 of 56) of the clearances suspended in calendar year 1992 and about 36 percent (14 of 39) in calendar year 1993. The disparities for African-Americans in calendar years 1991, 1992, and 1993 were all significant, according to the Fisher’s Exact Test. The population of contractor employees at this site also includes Asians, American Indians, and Hispanics. American Indians and Hispanics did not have their clearances suspended in this period. The number of Asians whose clearances were suspended did not show a statistically significant disparity. (See app. III for data on the contractor employees at the Savannah River Site.) The contractors we reviewed at DOE’s Oak Ridge facilities—Martin Marietta Energy Systems and M. K. Ferguson of Oak Ridge Company—employ about 21,000 people. Over 10,000 of those employees have security clearances. DOE suspended the security clearances of 164 of the contractor employees at its Oak Ridge facilities in fiscal years 1989 through 1993—the largest number of suspensions at the locations we reviewed. For one group, African-Americans, a statistically disparate number of clearances were suspended in 3 of the 5 fiscal years: 1989, 1992, and 1993. African-Americans at Oak Ridge made up between 8 and 10 percent of the workforce holding clearances in the years we reviewed. 
Although African-Americans represented a small portion of the total population holding clearances, in fiscal year 1989 about 44 percent (14 of 32) of those whose clearances were suspended were African-American. In fiscal year 1992, African-Americans made up 26 percent (13 of 50) of the population whose clearances were suspended; in fiscal year 1993, they made up 22 percent (7 of 32). A statistically disparate number of Hispanics also had their clearances suspended in fiscal year 1990. Specifically, Hispanics represented about 0.2 percent of the workforce in fiscal year 1990. However, about 6 percent (1 of 17) of those whose clearances were suspended were Hispanic. The disparities for African-Americans in fiscal years 1989, 1992, and 1993 and for Hispanics in fiscal year 1990 were significant, according to the Fisher’s Exact Test. (See app. IV for data on contractor employees at DOE’s Oak Ridge facilities.) Oak Ridge’s population of contractor employees also includes Asians and American Indians. However, no Asians or American Indians had their clearances suspended during the period covered by our review. Under federal equal employment opportunity policy, federal agencies and their contractors are not required to monitor the suspension of the security clearances for racial/ethnic minority groups. Because DOE is not required to do so, no organization in the Department collects information on the suspension of clearances by racial or ethnic group, and DOE was not aware of the statistical disparities discussed in this report. Executive Order 11246, entitled “Equal Employment Opportunity,” states that federal contractors will not discriminate against any employee or applicant for employment because of several factors, including race. To help in assessing compliance with the policy on equal employment opportunity, reports that federal agencies receive from contractors list employees by race and ethnicity. DOE further requires contractors to provide data on hirings, promotions, layoffs, and terminations. But DOE’s orders on equal employment opportunity do not require the contractors to document or track the suspension of security clearances for various population subgroups. Executive Order 11246 does not specifically discuss discrimination in security clearance matters and does not require personnel actions on security clearances taken by federal agencies or their contractors to be monitored. Within DOE, the Office of Safeguards and Security is responsible for establishing policies and procedures for security clearances for personnel. The Office bases its decisions to continue or suspend security clearances on 10 C.F.R. 710, “Criteria and Procedures for Determining Eligibility for Access to Classified Matter or Significant Quantities of Special Nuclear Material.” DOE Order 5631.2C, “Personnel Security Program,” implements this regulation. According to an official in the Office of Safeguards and Security, because race and ethnicity are not factors in the processes used for continuing or suspending security clearances, such information is not requested or gathered as part of the processes. DOE’s Office of Contractor Human Resource Management maintains data on the race and ethnicity of contractor employees but did not gather data on the suspensions of security clearances for the employees. DOE has two orders that apply to equal employment opportunity and affirmative action at the facilities operated by contractors. 
DOE Order 3220.4A, "Contractor Personnel and Industrial Relations Reports," requires that the contractors provide data on employment—such as hirings, separations, and promotions—by race and ethnicity so that DOE can evaluate the contractors' performance in human resource management. However, the order does not require contractors to provide data on suspensions of security clearances in terms of equal employment opportunity. DOE Order 3220.2A, "Equal Opportunity in Operating and Onsite Service Contractor Facilities," implements DOE's policy that there will be no discrimination at contractors' facilities because of race and that affirmative action will be taken to fully realize equal opportunity. The order details the responsibilities and authorities of the various offices responsible for equal employment opportunity and affirmative action. However, these responsibilities do not include tracking or analyzing the suspension of security clearances by race or ethnicity. DOE was not aware of the statistical disparities that our analysis revealed because it had not combined the data on security clearances—available at security offices—with the data on race and ethnicity—available at other offices. DOE's Office of Safeguards and Security and the site security offices had information about suspensions of clearances but did not have information on race and ethnicity because they were not required to have that information for granting or continuing security clearances. DOE's Office of Economic Impact and Diversity, which includes the offices of Civil Rights and Contractor Human Resource Management, had data on race and ethnicity but had no information on the suspension of security clearances. As previously noted, that office was not required to collect such data. DOE has not been tracking the suspension of clearances by racial/ethnic group. As a result of our analysis, DOE is now aware that contractor employees who are members of racial/ethnic minority groups were more likely than white employees to have their security clearances suspended in some of the years and locations we reviewed. It is important that DOE look into the reason for the statistical disparities to assure itself that discrimination is not occurring. We recommend that the Secretary of Energy (1) investigate the reasons for the disparities in the number of security clearances suspended for contractor employees in the locations and years identified by our review and take action to correct any problems that this investigation identifies in the Department's security clearance procedures, and (2) require that data on the racial and ethnic background of contractor employees whose clearances are suspended at all locations be compiled, monitored, and reviewed to identify any statistical disparities in the number of clearances suspended for minorities, and investigate and take appropriate corrective action if such disparities occur. As requested, we did not obtain written agency comments on a draft of this report. However, we discussed the information in this report with officials in DOE's Office of Nonproliferation and National Security and with officials from the Albuquerque, Oak Ridge, and Savannah River operations offices. These officials agreed with the facts contained in the report. However, they expressed concern about the statistical methodology we used to analyze the data on suspended clearances.
They said that our analysis was not sufficiently sophisticated to include a variety of demographic factors, such as age or job category, which could explain the statistical disparities we found. They concluded that our “one-faceted” approach to the demographic issue, combined with the very small number of clearances suspended, “renders the reasoning behind any finding of statistical disparity questionable . . . .” In this report, we have not attempted to determine why statistical disparities are occurring. We are only reporting that, according to the Fisher’s Exact Test, statistical disparities are occurring at all the locations included in our review—that is, more security clearances are being suspended for minorities than would be expected if suspensions occurred in a purely random fashion. We believe DOE needs to determine why these statistical disparities are occurring. In making this determination, DOE may need to conduct more sophisticated demographic studies of its workforce. Until such studies are completed, DOE cannot know why the security clearances of minority employees are being suspended more often than would be expected statistically. We also discussed the contents of this report with officials from DOE’s Office of Economic Impact and Diversity. These officials also agreed with the facts contained in the report. In addition, they said that the findings “serve as a basis for further review of the method utilized for suspending security clearances. . . .” We conducted this review at DOE headquarters and the Albuquerque, Savannah River, and Oak Ridge operations offices between June 1993 and August 1994 in accordance with generally accepted government auditing standards. We reviewed DOE’s records, applicable orders, and special program initiatives; interviewed DOE program officials and contractors; and merged data on security clearances with personnel information to analyze the data for statistical disparities in the number of clearances suspended. (See app. I for a more detailed discussion of our scope and methodology.) As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy; the Director, Office of Management and Budget; interested congressional committees; and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix V. To address the questions of the Chairman, House Committee on Government Operations, we had discussions on the suspension of security clearances with DOE officials in the Office of Safeguards and Security and Office of Civil Rights at the Department’s headquarters and operations offices at Albuquerque, Savannah River, and Oak Ridge. We also obtained data on such suspensions from these officials. In addition, we discussed suspensions with contractors at the Sandia and Los Alamos national laboratories, Savannah River Site, and Oak Ridge. The Albuquerque, Savannah River, and Oak Ridge operations offices, which administer these sites, are responsible for 54 percent of the Department’s total population of contractor employees holding security clearances. 
We also interviewed the Deputy Director of the Department of Labor's Office of Federal Contract Compliance Programs and examined the executive order and federal regulations on contractors' compliance programs for equal employment opportunity. In addition, we obtained data on ethnicity, sex, and total annual employment for contractor employees at the locations included in our review and reviewed a random sample of personnel security files to determine what data on ethnicity and sex were collected and recorded. In our analysis of suspensions, we used data provided by DOE on the populations whose clearances had been suspended and on the total populations within each racial/ethnic group at each location. We used the Fisher's Exact Test to (1) compare the proportion of each racial/ethnic group whose clearances had been suspended with the proportion of whites whose clearances had been suspended and (2) calculate the probability that the number of minorities whose clearances were suspended would have occurred had the suspensions been randomly distributed across the racial/ethnic groups. Analysis using the Fisher's Exact Test shows whether the occurrences can be explained by chance or may have been caused by some other factor. Our use of the Fisher's Exact Test had a confidence level of 95 percent, which means that some of the results (about 5 percent) that were found to be statistically significant could be due to chance alone. The Fisher's Exact Test is an exact test whose validity is not affected by the size of the sample. As a result, the test is commonly used when the number of events being analyzed is small. A significant result from this test does not conclusively demonstrate that discrimination has occurred; rather, it shows that the result differs significantly from what would be expected if race/ethnicity were not related to the suspension of a clearance. William R. Mowbray, Statistician
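To illustrate the comparison described in this methodology, the sketch below shows how a single site-year could be tested with the Fisher's Exact Test using the scipy library. The counts are hypothetical placeholders rather than the actual populations from our review, which are presented in the appendixes, and a two-sided test is shown for illustration.

```python
# Minimal sketch of the Fisher's Exact Test comparison described above.
# The counts below are hypothetical; the actual populations and suspension
# counts by group are reported in appendixes II through IV.
from scipy.stats import fisher_exact

# 2x2 table: rows are [minority group, white employees];
# columns are [clearances suspended, clearances not suspended].
minority_suspended, minority_not_suspended = 4, 296
white_suspended, white_not_suspended = 20, 11_980

table = [[minority_suspended, minority_not_suspended],
         [white_suspended, white_not_suspended]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")

# At the 95 percent confidence level used in this review, a p-value below 0.05
# indicates a disparity larger than would be expected if suspensions were
# unrelated to racial/ethnic group; it does not, by itself, demonstrate
# discrimination.
```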
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) security clearance program, focusing on: (1) whether racial or ethnic disparities existed among those employees who had their security clearances suspended; and (2) actions DOE needs to take to respond to these disparities. GAO found that: (1) between fiscal years 1989 and 1993, DOE suspended 425 security clearances for contractor employees; (2) African-American, Hispanic, and American Indian contractor employees had their security clearances suspended more often than would be statistically expected when compared with the majority of the workforce; (3) DOE was not aware of the statistical disparities because it did not monitor or track the suspension of clearances by racial/ethnic group; and (4) DOE needs to further evaluate why disparities in security clearances are occurring to ensure that discrimination is not occurring.
Mexico is the primary transit country for cocaine entering the United States from South America as well as a major source country for heroin, marijuana, and, more recently, methamphetamine. U.S. law enforcement efforts in the southeastern United States and the Caribbean during the mid-1980s caused cocaine traffickers to expand their routes to the drug markets in the United States. The traffickers' preferred routes were through Mexico, a country with a 2,000-mile border with the United States, a 30-year history of heroin and marijuana smuggling, and cross-border family ties. The Drug Enforcement Administration (DEA) estimates that up to 70 percent of the cocaine entering the United States currently transits Mexico. Since 1977, we have issued four reports that examined various aspects of U.S. and Mexican efforts to control drug production and trafficking. Many of the problems discussed in those reports continue to adversely affect current drug control efforts in Mexico. In our June 1995 testimony on U.S. efforts to stop the flow of drugs from cocaine-producing and transit countries, we highlighted problems in such areas as changes in the U.S. drug interdiction strategy; competing foreign policy objectives at some U.S. embassies; coordination of U.S. activities; management and oversight of U.S. assets; and willingness and ability of foreign governments to combat the drug trade. This report updates our prior work on drug control efforts in Mexico. The importance of Mexico to U.S. drug control efforts is best described by the Department of State, which reported in March 1996 that ". . . no country in the world poses a more immediate narcotics threat to the United States than Mexico." This view was reiterated by the Administrator of DEA, who testified in August 1995 that Mexico was ". . . pivotal to the success of any U.S. drug strategy." It is estimated that up to 70 percent of the more than 300 tons of cocaine that entered the United States in 1994 transited Mexico. DEA estimates that, at any one time, from 70 to 100 tons of cocaine are stockpiled in Mexico for movement into the United States. In its March 1996 International Narcotics Control Strategy Report, the Department of State estimated that Mexico supplies 20 to 30 percent of the heroin consumed in the United States (Mexican heroin is the predominant form available in the western half of the country) and up to 80 percent of the foreign-grown marijuana consumed in the United States. Mexican drug-trafficking organizations also dominate the U.S. methamphetamine trade and are major figures in the diversion of precursor chemicals necessary for the manufacture of methamphetamine. Narcotics traffickers use a variety of air, land, and sea conveyances and routes to move cocaine from Colombia (the world's largest manufacturer) to Mexico. Cocaine shipments are then moved overland through Mexico and across the U.S.-Mexican border. (See fig. 1.) Since the early 1990s, some traffickers have begun to use jet cargo aircraft that are larger and faster than the private aircraft used in the late 1980s. As we recently reported, traffickers in the Caribbean have changed their primary means of delivery and are increasingly using commercial and noncommercial maritime vessels (such as go-fast boats, sailing and fishing vessels, and containerized cargo ships) to transport drugs through the transit zone. According to officials at the U.S. Embassy in Mexico City, about two-thirds of the cocaine currently entering Mexico is transported by maritime means.
Department of Defense (DOD) records show that the number of known drug-trafficking events involving aircraft in the transit zone declined by about 65 percent from 1992 to 1995 and that known maritime drug-trafficking events increased by about 40 percent from 1993 to 1995. The U.S. Embassy in Mexico City reported that 15 known air-trafficking events were detected in Mexico during 1995. Drug traffickers respond quickly to interdiction operations by adjusting their delivery routes and means of transport. Some traffickers have begun to use aircraft not ordinarily associated with cocaine movement, such as commercial jets and air cargo aircraft, and maritime vessels to move drugs into Mexico. Traditionally, traffickers have relied on twin-engine general aviation aircraft to deliver cocaine shipments that ranged from 800 to 1,000 kilograms. Beginning in 1994, however, some trafficking groups began using larger Boeing 727-type jet aircraft that could fly faster than U.S. and Mexican detection and monitoring aircraft and deliver up to 10 metric tons of cocaine per trip. To date, there have been eight known deliveries using this means of transport. During the past 3 years, Mexican trafficking organizations operating on both sides of the border have replaced U.S.-based outlaw motorcycle gangs as the predominant methamphetamine manufacturers and traffickers in the United States. DEA estimates that up to 80 percent of the methamphetamine available in the United States is either produced in Mexico and transported to the United States or manufactured in the United States by Mexican traffickers. Methamphetamine seizures in Mexico have grown from a negligible amount in 1992 to 495 kilograms in 1995. Also, the amount of methamphetamine seized along the border rose from 6.5 kilograms in 1992 to 665 kilograms in 1995. Unlike in the cocaine trade, Mexican drug-trafficking organizations control the production and distribution of methamphetamine and, because they have complete control, they retain 100 percent of the profits. In recent years, drug-trafficking organizations in Mexico have become more powerful as they have expanded their operations to include not only the manufacture and distribution of methamphetamine but also the trafficking and distribution of cocaine in the United States. Initially, Mexican drug-trafficking organizations acted as transportation agents for Colombian organizations and only smuggled cocaine across the U.S. border. As they became the key transporters for the Colombians, the Mexicans began to demand and receive a portion of the drug shipment for their services. According to DEA, Mexican drug-trafficking groups often receive up to half of a cocaine shipment for their services. This has resulted in Mexican drug-trafficking groups substantially increasing their profits and gaining a foothold in the lucrative cocaine wholesale business in the United States. According to DEA, Mexican drug traffickers have used their vast wealth to corrupt police and judicial officials as well as project their influence into the political sector. The Administrator of DEA recently testified that some of Mexico's major drug-trafficking organizations have the potential to become as powerful as their Colombian counterparts. Proximity to the United States, endemic corruption, and little or no regulation have combined to make Mexico a money-laundering haven for the initial placement of drug profits into the world's financial system.
Once placed in the Mexican financial system, funds can be transferred by wire to virtually anywhere in the world. Mexico is also the most important transit point for bulk money shipments from the United States to the drug-trafficking organizations in Mexico and Colombia. Mexican officials estimated that billions of dollars in drug proceeds were repatriated by Mexican drug-trafficking organizations in 1994, and the total amount moved into Mexico for eventual repatriation to Colombia was much higher. Mexico eradicated substantial amounts of marijuana and opium poppy crops in 1995 but other counternarcotics activities, including cocaine seizures and arrests of traffickers, have declined since 1992. Mexico’s efforts to stop the flow of drugs have been limited by numerous problems. These problems include widespread, endemic corruption; economic and political difficulties encountered by the government of Mexico; the absence of some legislation necessary to provide a complete foundation for a meaningful counternarcotics effort; and inadequate equipment and training that limit Mexico’s capabilities to detect and interdict drugs and arrest drug traffickers. In January 1993, the government of Mexico initiated a policy to conduct its own counternarcotics activities, assumed most of the costs of the counternarcotics effort and refused most forms of U.S. drug-control assistance. This policy, commonly known as the “Mexicanization” of the drug effort, has resulted in major reductions in the U.S. counternarcotics assistance program in Mexico. During this period, Mexico has seized only about half as much cocaine and made only about a third as many drug-related arrests. Despite Mexico’s counternarcotics efforts, the amount of cocaine seized and the number of drug-related arrests in Mexico have declined from 1993 to 1995 compared to those before U.S. assistance was curtailed. The average annual amount of cocaine seized in Mexico from 1990 to 1992 was more than 45 metric tons, including more than 50 metric tons in 1991. In contrast, from 1993 to 1995, average cocaine seizures declined to about 30 metric tons annually, including about 22 metric tons in both 1994 and 1995. The number of drug-related arrests in Mexico in 1992 was about 27,600 persons whereas, by 1995, the number had fallen to about 9,900—a decline of nearly two-thirds. In commenting on this report, the Department of State attributed the decline in the number of arrests to a change in emphasis that focused on arresting major drug traffickers. For example, in January 1996, Mexico arrested Juan Garcia-Abrego, reputed leader of one of Mexico’s drug cartels, and expelled him to the United States for prosecution. Mexico has made some efforts in counternarcotics. For example, Mexican military personnel have increased their participation in combating illicit drugs and destroying illegal airfields. The Mexican Army has traditionally been involved in the manual eradication of illicit drug crops. During 1995, the Mexican government reported that more than 7,000 soldiers worked full time on drug eradication programs and, during peak growing seasons, the number of soldiers working on these programs grew to 11,000. Army personnel are assigned to remote growing areas for short-term (90-day) tours during which they manually cut down, uproot, and burn opium poppy and marijuana plants and patrol rural areas to halt the transportation of these and other illicit drugs. 
According to the Department of State, Mexican personnel effectively eradicated 29,000 acres of marijuana and almost 21,000 acres of opium poppy during 1995. As a further indication of increasing the role of the military, President Zedillo directed the Mexican Air Force to use its F-5 fighter aircraft to assist the Attorney General’s Office in air interdiction efforts in 1995. However, assigning the aircraft to an interdiction mission may not have an immediate impact because, according to U.S. officials, deficiencies in the capabilities and maintenance of the F-5s, as well as poorly trained pilots and mechanics, limit the effectiveness and possibilities of success of the Mexican Air Force in this new mission. The Department of State reports that pervasive corruption continues to seriously undermine counternarcotics efforts in Mexico. In addition, the Administrator of DEA testified in March 1996 that Mexican drug-trafficking organizations have become so wealthy and powerful that they can rival legitimate governments for influence and control. While drug-related corruption exists on both sides of the border, the Department of Justice believes that it is more prevalent in Mexico than in the United States. After taking office in late 1994, Mexican President Zedillo directed the Mexican military—widely perceived to be the least corrupt government institution—to expand its involvement in attempting to stop narcotics-related corruption. Following an investigation that revealed extensive corruption within the Mexican federal judicial police forces in the state of Chihuahua, a contingent of Mexican Army officers and a number of civilian personnel employed by the Mexican military were reassigned to replace 60 judicial police personnel in December 1995. According to Mexican officials, the deployment of Army personnel is not a short-term quick fix but, rather, a commitment to remain in Chihuahua until rampant police corruption is brought under control. Despite the efforts that President Zedillo has undertaken since late 1994, U.S. and Mexican officials told us that corruption in Mexico is still widespread within the government and the private sector. They added that corruption can be found within many government agencies, but it is especially prevalent within law enforcement organizations, including the Mexican federal judicial police and other police forces. Mexican federal and state police personnel have reportedly participated in the movement of drugs, including one instance in November 1995 in which federal and state personnel off-loaded a cargo jet laden with from 6 to 10 metric tons of cocaine. In another instance, 34 federal judicial police personnel were arrested by the Mexican Army in June 1995 when they were found to be protecting a major drug trafficker. Another example occurred in March 1995 when 16 officers of the National Institute for Combatting Drugs (the Mexican equivalent of DEA) were arrested for accepting cocaine and cash to allow a 1.2-metric ton shipment of cocaine to proceed. Drug-related corruption is not limited to federal police personnel. As we indicated in our June 1995 testimony, many local police officers are susceptible to corruption because they earn very low salaries. Sometimes, their salaries are equivalent to only about $3 per day, which is not enough to provide many of their families’ basic needs. 
More recent reports indicate that the take-home pay of a foot patrolman in Mexico City is about $6 per day—an increase since June 1995, but still much too low to reduce susceptibility to corruption. President Zedillo has openly acknowledged the problems created by corruption, publicly stated his commitment to stopping it, and taken some actions to reduce it. Within the Office of the Attorney General, these actions include restructuring the Office to facilitate counternarcotics efforts, increasing the amounts of staff and equipment, and undertaking extensive training programs. Within the Ministry of Finance, a separate Money Laundering Directorate was created to enhance the government’s investigative capabilities and improve its auditing procedures to identify drug-generated cash. Despite these efforts, counternarcotics efforts continue to face major obstacles in Mexico because, according to one U.S. law enforcement official, corruption has been part of the social and cultural fabric of Mexico for generations. In addition, the Department of State reported in March 1996 that endemic corruption continued to undermine both policy initiatives and law enforcement operations. Moreover, the Mexican Attorney General stated that addressing the deep-rooted problems of corruption would take all 6 years of President Zedillo’s term in office. Since 1992, the Mexican government has confronted several major crises that have competed with drug control activities for government resources. U.S. officials have stated that these crises, both economic and political, have adversely affected the overall counternarcotics efforts. According to one U.S. official, the Mexican government neither publicly announces nor shares the actual funding levels for its counternarcotics programs with the United States. However, it is evident that a substantial amount of the Mexican government’s attention and resources have been focused on concerns other than counternarcotics. In December 1994, Mexico experienced a major economic crisis—a devaluation of the peso that eventually resulted in a $20-billion U.S. financial assistance package. Further erosion in the peso’s value resulted in a decline to approximately one-half of its pre-crisis value. In addition, the rate of unemployment was 17 percent in October 1995, and it is projected to be 13 percent for 1996. Furthermore, high rates of inflation—projected to range from 27 to 29 percent in 1996—have continued to limit Mexico’s economic recovery. In addition to economic concerns, Mexico had to focus funds and resources in the southern state of Chiapas on its effort to suppress an insurgency movement. In doing so, the government required the use of Mexican military, police, other personnel, equipment, and resources that might otherwise have been used for counternarcotics purposes. Mexico has lacked some of the basic legislative tools necessary to combat drug-trafficking organizations at the law enforcement level. According to the Department of State, the use of wiretaps, confidential informants, and a witness protection program was included in legislation recently passed by the Mexican Congress. These essential tools, according to DEA, have been used by U.S. law enforcement agencies to successfully combat organized crime within the United States. Also, until May 1996, the laundering of drug profits was not a criminal offense in Mexico. U.S. 
officials in Mexico City told us that enacting strong legislation that criminalizes money laundering and requires the reporting of large currency transactions will not, in and of itself, ensure success in reducing or eliminating money laundering. They estimated that, at best, it will take at least 5 years before substantial reductions in money laundering can occur. They also said that banks and other financial institutions continue to strongly resist the reporting requirements because of the additional costs and administrative burdens of handling and processing the reports. In addition, according to U.S. officials, large numbers of personnel from both the government and the private sector would have to be trained to prepare the currency transaction reports, and the government would need to train qualified financial investigators to monitor and enforce the transaction requirements. Despite the additional costs, administrative burden, and training that would be required, most U.S. and Mexican officials we contacted believe that a reduction in money laundering cannot be accomplished without enacting, implementing, and enforcing such reporting requirements. Moreover, until May 1996, Mexico’s laws lacked sufficient penalties to effectively control precursor chemicals that are used to manufacture methamphetamine. According to U.S. officials, the ineffective penalties encouraged potential traffickers to use Mexico to transship ephedrine, pseudoephedrine, and other chemicals from their manufacturers, many located in Europe, to U.S. and Mexican methamphetamine laboratories. To counter the growing threat posed by these chemicals, the United States encouraged Mexico to adopt strict chemical control laws. The counternarcotics capabilities of the Mexican government to detect and interdict drugs and drug traffickers, as well as to aerially eradicate drug crops, are hampered by aircraft that are sometimes inadequately equipped and by aircraft and equipment that are poorly maintained because of spare parts’ shortages. The Office of the Attorney General and the Mexican Air Force have over 150 aircraft, including F-5 fighter aircraft and UH-1H helicopters, and a variety of equipment for interdiction and eradication operations. According to U.S. officials, many of the F-5 jets have only a small chance of successfully interdicting drug-trafficking aircraft because they do not have operational radar units and are not configured for night-vision operations. Equipment, such as global positioning systems and radios that are used in eradication operations, is frequently inoperable and poorly maintained. In addition to equipment problems, some Mexican pilots, mechanics, and technicians are not adequately trained, thus limiting Mexico’s effectiveness in performing counternarcotics activities. Department of State officials view the Office of the Attorney General’s UH-1H pilots as well-trained and disciplined. However, many F-5 pilots receive only a few hours of proficiency training every month, which is considered not nearly enough to maintain flying skills needed for interdiction. In addition, the officials told us that many mechanics and technicians lack the necessary skills to keep equipment operable because of insufficient training. Relative to the threat posed by narcotics produced in and transported through Mexico and the pivotal role Mexico plays in the success of any U.S. drug control strategy, the size of the U.S. counternarcotics effort in Mexico is extremely small. 
Before 1992, Mexico was the largest recipient of U.S. counternarcotics assistance, as it received about $237 million between fiscal years 1975 and 1992. In fiscal year 1992, the United States provided about $45 million in assistance that included the provision of excess helicopters, military aviation training, funding of the maintenance of Mexico's antinarcotics air fleet, construction of a new maintenance facility, support for the manual and aerial eradication of marijuana and opium poppy, and demand reduction and education programs. In early 1993, the Mexican government assumed nearly all the costs associated with the counternarcotics effort in Mexico. Since then, U.S. assistance has sharply declined and, in fiscal year 1995, amounted to only $2.6 million, most of which was for spare helicopter parts. With the November 1993 issuance of Presidential Decision Directive Number 14, the United States changed the focus of its international drug control strategy from interdicting cocaine as it moved through the transit zone of Mexico and the Caribbean to stopping cocaine in the source countries of Bolivia, Colombia, and Peru, before the drug could reach the transit zone. To accomplish this, drug interdiction resources were to be reduced in the transit zone while, at the same time, increased in the source countries. As discussed in our April 1996 report, DOD and other agencies involved in drug interdiction activities in the transit zone began to see major reductions in their drug interdiction resources and capabilities in fiscal year 1993. Table 1 shows the funding levels for those agencies and the reductions that have occurred since issuance of the presidential directive. According to the Department of State, U.S. efforts in Mexico are guided by an interagency strategy developed in 1991. The strategy focused on strengthening the political commitment and institutional capability of the Mexican government, targeting major drug-trafficking organizations, and developing operational initiatives, including the interdiction of drugs. Key components of the strategy were dependent upon Department of State funding, which was reduced in January 1993 when the Mexican government assumed most counternarcotics costs. Since then, the Department of State's counternarcotics programs and staff in Mexico have experienced major reductions. For example, the Narcotics Affairs Section has received no new program funding since fiscal year 1992, and the size of its staff has been reduced from 17 to 7. According to U.S. officials, the Narcotics Affairs Section has been operating on unexpended prior-year and pipeline funds. In contrast, U.S. Customs Service and DEA operations in Mexico have not been reduced because their programs consist primarily of the costs of (1) salaries for U.S. employees, (2) equipment used by U.S. personnel, and (3) the development of drug-related information and intelligence. Despite the virtual absence of a U.S. counternarcotics assistance program in Mexico during the past 3 years, the United States has provided some limited training and equipment to the Mexican government. For example, DOD recently provided $1.8 million in emergency spare parts to support helicopters that had been provided previously by the United States. In addition to the U.S. programs discussed above, the United States provides indirect support for counternarcotics efforts in Mexico.
This support includes sharing with Mexican officials the results of some DOD and Customs detection and monitoring activities in South America and Central America, and some data developed by the counternarcotics intelligence community. According to officials at the U.S. Embassy in Mexico City, reductions in the size of the U.S. counternarcotics program have resulted in corresponding decreases in the number of staff available to monitor how U.S.-provided helicopters and other types of U.S. assistance are being used. To ensure that U.S.-provided military assistance is properly maintained and not misused, section 505 of the Foreign Assistance Act of 1961, as amended, sets forth certain assurances that recipient governments must make before the United States can transfer defense-related commodities and services. Among other things, these assurances permit continued U.S. access to the asset, provide for the security of the asset, and prevent the sale of the asset without U.S. approval. The Mexican government, however, has objected to direct U.S. oversight requirements. In some instances, the Mexican government has refused to accept assistance that was contingent on its signing such an agreement. In other instances, this position resulted in lengthy negotiations between the two countries to develop agreements that satisfied the requirements of section 505 and were sensitive to Mexican concerns about national sovereignty. As we reported in 1993, these delays resulted in Mexico receiving only about 60 percent of the $43 million in emergency U.S. counternarcotics assistance authorized in 1990 and 1992. Before the Mexicanization policy, the Department of State employed several advisers who were stationed at the aviation maintenance center in Guadalajara and the pilot training facility in Acapulco. One of their duties was to monitor the use of the numerous U.S.-provided helicopters, which are dispersed throughout Mexico, and the inventory of aviation spare parts. The advisers would periodically report their end-use monitoring observations to the Narcotics Affairs Section at the U.S. Embassy in Mexico City. The advisers and embassy personnel also discussed their observations with representatives from Bell Helicopter, which the Department of State had contracted to maintain the Mexican counternarcotics air fleet. With the advent of the Mexicanization policy, the number of State Department Foreign Service and contract personnel was greatly reduced and the aviation maintenance contract was awarded by the Mexican government. As a result, the State Department currently has fewer personnel in the field to review operational records and monitor how the 30 U.S.-provided helicopters are being used. According to U.S. officials, the embassy relies heavily on biweekly reports submitted by the Mexican government that typically consist of a map of Mexico with the state to which a helicopter is deployed highlighted and a listing of helicopters that are inoperative at the time of the report. Unless they request specific operational records, U.S. personnel have little way of knowing if the helicopters are being properly used for counternarcotics purposes or are being misused. Embassy officials told us that helicopter operational records have been requested and received on only one occasion in the past 8 months to provide information to visiting U.S. officials. Drug traffickers have traditionally used aircraft to move drug shipments from Colombia to the staging areas of Mexico. 
To respond to aircraft movements, DOD has devoted extensive resources to detecting and monitoring suspicious aircraft as they fly from South America to staging areas outside of the United States. The 1993 change in the U.S. drug interdiction strategy reduced the detection and monitoring assets in the transit zone. According to officials at the U.S. Embassy in Mexico City, this reduction creates a void in the radar coverage, and some drug-trafficking aircraft are not being detected as they move through the eastern Pacific Ocean. As an example, the embassy cited the November 1995 flight of a Caravelle cargo jet to Baja California. The jet reportedly contained 6 to 10 tons of cocaine and U.S. officials did not know that it was a drug-related flight until 2 days after it landed. DOD officials acknowledge that radar voids have always existed throughout the transit zone and the eastern Pacific area. These voids are attributable to the vastness of the Pacific Ocean and the limited range of ground- and sea-based radars. As a result, DOD officials believe that existing assets must be used in a “smarter” manner rather than flooding the area with expensive vessels and ground-based radars, which are not currently available. In Mexico, U.S. assistance and DEA activities have focused primarily on interdicting trafficking aircraft as they deliver their drug cargoes. However, as discussed previously, traffickers are increasingly using commercial and noncommercial maritime conveyances to move drugs into Mexico. Commercial maritime smuggling primarily involves moving drugs by containerized cargo ships. Noncommercial maritime smuggling involves either “mother ships” that depart Colombia and rendezvous with either fishing vessels or smaller craft that, in turn, smuggle cocaine into a Mexican port, or “go-fast” boats that depart from Colombia and make a direct run to Mexico’s Yucatan Peninsula. According to officials at the U.S. Embassy in Mexico City, about two-thirds of the cocaine currently entering Mexico is transported by maritime means. Efforts to address the maritime movement of drugs into Mexico are minimal, when compared to the increasing prevalence of this mode of trafficking. According to officials at the U.S. Embassy, the Mexican government is developing a port inspection unit and the Mexican Navy is involved in patrolling the Mexican coast and navigable rivers, boarding suspect vessels, and eradicating illicit crops in coastal regions. The U.S. program for addressing this problem is also small and consists mainly of monitoring some ship movements and providing training to Mexican naval personnel. The U.S. program is based on prior explicit intelligence on the movement of drug carrying vessels. DOD officials told us that without prior intelligence, the detection and monitoring of ships is impossible since thousands of fishing, commercial, and other vessels are found in sea lanes between Colombia and Mexico daily. Department of State officials believe that Mexican maritime interdiction efforts would benefit from training offered by the Customs Service and the Coast Guard in port inspections and vessel boarding practices. However, according to DOD, Mexican law and custom have limited the amount of interaction between the Mexican Navy and these two U.S. agencies in the past. Department of State officials note that the degree to which the Mexican Navy becomes involved in drug control efforts will be an indicator of the political will of the country to address the drug-trafficking problem. 
Since our June 1995 testimony, a number of events have occurred that could affect future drug control efforts by the United States and Mexico. First, the importance of drug control issues at the U.S. Embassy in Mexico City has been elevated, and the embassy has developed a drug control plan that focuses the efforts of all U.S. agencies in Mexico on specific goals and objectives. Second, the Mexican government has enacted legislation that strengthens fiscal regulations governing financial institutions and other legislation aimed at reducing money laundering. Third, according to U.S. officials, the Mexican government has signed a mutually acceptable section 505 transfer agreement that will cover future military equipment transfers. Fourth, the United States and Mexico have created a framework for increased cooperation and the development of a joint counternarcotics strategy. The U.S. Embassy in Mexico City elevated counternarcotics from the fourth highest priority—its 1995 ranking—in its Mission Program Plan for Mexico to a top priority, which is shared with the promotion of U.S. business and trade. The U.S. Ambassador to Mexico told us that, because the immediacy of the North American Free Trade Agreement and the U.S. involvement in the financial support program for the Mexican economy have subsided, he has been able to focus a substantial amount of his attention on counternarcotics issues since mid-1995. In July 1995, the U.S. Embassy in Mexico City developed a detailed embassywide counternarcotics plan for U.S. efforts in Mexico. The plan involves the activities of all agencies involved in counternarcotics activities at the embassy and focuses on (1) disrupting and dismantling Mexican drug cartels and their political allies, (2) reducing money laundering, (3) strengthening Mexican institutions, and (4) interdicting drug shipments and eradicating illicit crops. The plan also identifies several programs that the embassy believes will lead to attaining these goals, as well as specific program milestones and measurable objectives, and sets forth funding levels and milestones for measuring progress. The embassy estimated that it will require $5 million in Department of State funds to implement this plan during fiscal year 1996. However, according to State Department officials, only $1.2 million in counternarcotics funds will be available for efforts in Mexico during fiscal year 1996. Of this amount, about $800,000 is expected to be used to support the Narcotics Affairs Section and $400,000 is to fund a program to assist Mexico’s judicial system. According to State Department officials, the fiscal year 1997 budget request includes $5 million for the Department of State’s narcotics control efforts in Mexico. Senior Department of State officials do not believe there is a conflict between the policy of reducing the level of resources in the transit zone outlined in the presidential directive and current efforts to increase drug interdiction assistance and resources to Mexico. These officials told us that the United States needs to pay special attention to drug control efforts in Mexico because (1) Mexico is the staging area for drugs entering the United States, (2) the influence of drug-trafficking organizations in Mexico has increased, and (3) the borders are relatively easy to cross. 
After taking office in December 1994, President Zedillo declared drug trafficking “Mexico’s number one security threat.” Accordingly, President Zedillo advocated legislative changes that could improve Mexico’s ability to combat drugs and drug-related crimes. During the session that ended on April 30, 1996, the Mexican Congress enacted legislation that could improve some of Mexico’s counternarcotics capabilities. Some of the newly enacted legislation is effective immediately and includes provisions that make money laundering a criminal offense within Mexico’s penal code. However, other legislation to provide Mexican law enforcement agencies with some essential tools needed to arrest and prosecute drug traffickers and money launderers requires amending the Mexican constitution. These tools include the use of electronic surveillance and other modern investigative techniques that, according to U.S. officials, are very helpful in attacking sophisticated criminal organizations. Department of State officials told us that it appears likely that the amendments will be ratified in the near future, perhaps as soon as the end of June 1996. To date, the Mexican Congress has not addressed several other key issues that would support its counternarcotics efforts. These issues include a requirement that all financial institutions report large cash transactions through currency transaction reports. Although some U.S. officials disagree on the value of such reports, none dispute that currency transaction reports are useful tools that could deter and reduce money laundering. According to U.S. officials, various U.S. government agencies are working closely with Mexican officials to address the issue of currency transaction reports. However, the officials acknowledged that, even if legislation requiring the use of currency transaction reports is enacted, it will take the Mexican government 5 years or longer to fully implement the laws because of the extensive administrative procedures and training that would be required. To follow up on mutual concerns discussed during the U.S. Secretary of Defense’s October 1995 visit to Mexico, military and diplomatic representatives of the two countries met in San Antonio, Texas, in December 1995. According to a U.S. participant at this meeting, representatives of the Mexican government proposed that an agreement be developed for future transfers of military equipment. With such an agreement, equipment could be transferred quickly to the Mexican government, avoiding the lengthy delays encountered in the past. U.S. officials view this as an indication that the Mexican government and its military components are committed to stopping the flow of drugs through Mexico. According to U.S. officials, a formal agreement was signed in mid-April 1996, and the United States announced shortly thereafter its intention to transfer a number of helicopters and spare parts to the Mexican Air Force to enhance its role in interdiction and support for law enforcement activities. Twenty UH-1H helicopters are scheduled to be transferred in fiscal year 1996 and up to 53 in fiscal year 1997. According to the Department of State, details about how the pilots will be trained, as well as how the helicopters will be operated, used, and maintained, are being worked out. In March 1996, Presidents Clinton and Zedillo established a high-level contact group to better address the threat narcotics poses to both countries. 
The Director of the Office of National Drug Control Policy (ONDCP) co-chaired the contact group’s first meeting, held in Mexico City in late March to review drug control policies, enhance cooperation, develop new strategies, and begin developing a new plan of action. At the conclusion of this meeting, the contact group issued a 10-point joint communique that called for actions such as developing a joint antinarcotics strategy, increasing counternarcotics cooperation, and implementing laws to criminalize the laundering of drug profits. Binational working groups have been formed to plan and coordinate implementation of the contact group’s initiatives. A follow-up meeting is scheduled during the summer of 1996 in Washington, D.C. According to ONDCP officials, the joint antinarcotics strategy is expected to be completed in late 1996. ONDCP and DEA provided comments on a draft of this report (see apps. I and II); the Departments of State and Defense provided oral comments; and the Department of Justice provided informal comments. ONDCP and the Departments of State and Defense generally agreed with the report’s content and major conclusions. In commenting on the reduction in interdiction resources available for activities in the transit zone and source countries, ONDCP stated that these reductions were largely the result of congressional action. DEA, however, raised concerns that the draft report did not accurately reflect the many positive counternarcotics initiatives undertaken by the governments of Mexico and the United States. We consequently updated the report to reflect Mexican legislative initiatives and bilateral efforts. We also made changes to reflect additional information provided by the Department of Justice, as well as other agencies. To obtain information for this report, we spoke with appropriate officials and reviewed planning documents, studies, cables, and correspondence at DOD and the Department of State, the U.S. Customs Service, DEA, the Federal Bureau of Investigation, and ONDCP in Washington, D.C. In addition, at the U.S. Embassy in Mexico City, Mexico, we interviewed the Ambassador and the Deputy Chief of Mission. We also interviewed responsible officials from the Narcotics Affairs, Political, and Economic Sections; the Defense Attaché Office; the Military Liaison Office; the Information Analysis Center; DEA; the Federal Bureau of Investigation; the U.S. Customs Service; and the Department of the Treasury. We also attended various drug-related meetings and reviewed documents prepared by U.S. Embassy personnel. To obtain the views of the Mexican government, we met with representatives of the Mexican Embassy in Washington, D.C. In Mexico City, Mexico, we met with the Mexican Secretary of Foreign Relations; the Deputy Foreign Minister for North American Affairs; the Coordinator for Counternarcotics Programs (Secretaria de Relaciones Exteriores); the Deputy Attorney General (Procuraduria General de la Republic Sub-Procurador Juridico); the Deputy Finance Minister (Secretaria de Hacienda y Credito Publico); and representatives of the Ministry of Defense. We also visited the Mexican Attorney General’s aircraft maintenance facility in Mexico City, Mexico, where we met with Mexican government officials responsible for maintaining the 30 U.S.-provided UH-1H helicopters and the Mexican air interdiction fleet. At the maintenance facility, we also met with U.S. officials responsible for developing a spare parts inventory system for the Office of the Attorney General. 
Information on Mexican law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. We conducted our review from January through June 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to other congressional committees; the Secretaries of State and Defense; the Attorney General; the Administrator, Drug Enforcement Administration; and the Directors of the Office of National Drug Control Policy and Federal Bureau of Investigation. Copies will also be made available to other interested parties upon request. If you or your staff have any questions concerning this report, I can be reached on (202) 512-4268. The major contributors to this report were Allen Fleener and George Taylor. The following are GAO’s comments on the Drug Enforcement Administration’s (DEA) memorandum dated June 3, 1996. 1. The report text has been modified to reflect this information. 2. We believe that the report presents an accurate portrayal of actions taken by the Mexican government. 3. This discussion has been deleted from the final report. 4. We presented information from 1992 to illustrate changes that have taken place since the institution of Mexican efforts to implement their own counternarcotics policy. 5. We have discussed this issue with DEA and the situation is currently under review.
Pursuant to a congressional request, GAO reviewed counternarcotics activities in Mexico, focusing on: (1) the nature of the drug-trafficking threat from Mexico; (2) Mexican government efforts to counter drug-trafficking activities; (3) the U.S. strategy and programs intended to stem the flow of illegal drugs through Mexico; and (4) recent initiatives by the United States and Mexico to increase counternarcotics activities. GAO found that: (1) Mexico continues to be a major transit point for cocaine, heroin, marijuana, and methamphetamine entering the United States; (2) drug traffickers have changed their preferred mode of transportation for moving cocaine into Mexico, decreasing the use of aircraft and increasing the use of maritime vessels, which are currently used to move an estimated two-thirds of the cocaine entering Mexico; (3) Mexico eradicated substantial amounts of marijuana and opium poppy crops in 1995; (4) however, U.S. and Mexican interdiction efforts have had little, if any, impact on the overall flow of drugs through Mexico to the United States; (5) the current Mexican government appears committed to fighting drug trafficking, but, according to U.S. officials, is hampered by pervasive corruption of key institutions, economic and political problems, and limited counternarcotics and law enforcement capabilities; (6) the current U.S. strategy in Mexico focuses on strengthening the Mexican government's political commitment and institutional capability, targeting major drug-trafficking organizations, and developing operational initiatives; (7) in late 1993, the United States revised its international cocaine strategy from focusing on intercepting drugs as they move through the transit region of Central America, Mexico, and the Caribbean to stopping cocaine at its production source in South America; (8) U.S. counternarcotics activities in Mexico and the transit zone have declined since 1992; (9) multiple-agency drug interdiction funding for the transit zone, including Mexico, declined from about $1 billion in fiscal year (FY) 1992 to about $570 million in FY 1995; (10) the U.S. assistance program in Mexico has been negligible since Mexico initiated its policy of refusing nearly all U.S. counternarcotics assistance in early 1993; (11) staffing cutbacks have limited U.S. capabilities to monitor previously funded U.S. assistance; and (12) since GAO's June 1995 testimony, several events have occurred that could greatly affect future drug control efforts by the United States and Mexico: (a) drug control issues have been elevated in importance at the U.S. embassy and a drug control operating plan with measurable goals has been developed for U.S. agencies in Mexico; (b) the Mexican government has recently signaled a willingness to develop a mutual counternarcotics assistance program; (c) the Mexican government has taken some action on important law enforcement and money laundering legislation; and (d) the United States and Mexico have created a framework for increased cooperation and are currently developing a new binational strategy.
SEC’s financial statements, including the accompanying notes, present fairly, in all material respects, in conformity with U.S. generally accepted accounting principles, SEC’s assets, liabilities, net position, net costs, changes in net position, budgetary resources, and custodial activity as of, and for the fiscal years ended, September 30, 2007, and September 30, 2006. However, misstatements may occur in other financial information reported by SEC and may not be prevented or detected because of the internal control deficiencies described in this report. As disclosed in footnote 1.C. to SEC’s financial statements, in fiscal year 2007, SEC changed its method of accounting for user fees collected in excess of current-year appropriations. Because of the material weakness and significant deficiencies in internal control discussed below, SEC did not maintain effective internal control over financial reporting as of September 30, 2007, and thus did not have reasonable assurance that misstatements material in relation to the financial statements would be prevented or detected on a timely basis. Although certain compliance controls should be improved, SEC maintained, in all material respects, effective internal control over compliance with laws and regulations as of September 30, 2007, that provided reasonable assurance that noncompliance with laws and regulations that could have a direct and material effect on the financial statements would be prevented or detected on a timely basis. Our opinion on internal control is based on criteria established under 31 U.S.C. § 3512(c), (d), commonly referred to as the Federal Managers’ Financial Integrity Act (FMFIA), and the Office of Management and Budget (OMB) Circular No. A-123, Management Accountability and Control. During this year’s audit, we identified significant control deficiencies in SEC’s financial reporting process, which, taken collectively, result in more than a remote likelihood that a material misstatement of the financial statements will not be prevented or detected. Therefore, we considered the combination of the following control deficiencies to collectively constitute a material weakness in SEC’s financial reporting: the period-end financial reporting process, accounts receivable for disgorgements and penalties, accounting for transaction fee revenue, and preparation of financial statement disclosures. In addition to the material weakness discussed above, we identified three significant deficiencies in the design or operation of internal control. Although we are considering these issues separately from the material weakness described above, and although they do not rise to the level of material weaknesses, they nevertheless adversely affect SEC’s ability to meet financial reporting and other internal control objectives. These deficiencies concern information security, property and equipment, and accounting for budgetary resources. In our prior year audit, we reported on weaknesses we identified in the areas of SEC’s (1) recording and reporting of disgorgements and penalties, (2) information systems controls, and (3) property and equipment controls. During fiscal year 2007, SEC improved its controls over the accuracy, timeliness, and completeness of the disgorgement and penalty data and used a much improved database for the initial recording and tracking of these data. However, the processing of these data for financial reporting purposes is still done through a manual process that is prone to error. 
We found that the internal controls that compensated for the manual processing of the related accounts receivable balances in fiscal year 2006 were not effective in fiscal year 2007. This issue is included in the material weakness in SEC’s financial reporting process for fiscal year 2007. SEC continues to make progress in resolving the information security weaknesses. However, previously identified weaknesses still need to be addressed, along with new weaknesses we found during this year’s audit. Therefore, we consider information security to be a significant deficiency as of September 30, 2007. In addition, we continued to identify the same weaknesses in controls over property and equipment during this year’s audit, and therefore, we considered this area to be a significant deficiency as of September 30, 2007. Although SEC had one material weakness and three significant control deficiencies in internal control, SEC’s financial statements were fairly stated in all material respects for fiscal years 2007 and 2006. However, the weaknesses in internal control noted above may adversely affect any decision by SEC management that is based, in whole or in part, on information that is inaccurate because of these weaknesses. In addition, unaudited financial information reported by SEC, including performance information, may also contain misstatements resulting from these weaknesses. We will be reporting additional details concerning the material weakness and the significant deficiencies separately to SEC management, along with recommendations for corrective actions. We will also be reporting less significant matters involving SEC’s system of internal controls separately to SEC management. During this year’s audit, we found control deficiencies in SEC’s period-end financial reporting process, in its calculation of accounts receivable for disgorgements and penalties, in its accounting for transaction fee revenue, and in preparing its financial statement disclosures. We believe these control deficiencies, collectively, constitute a material weakness. SEC’s financial management system does not conform to the systems requirements of OMB Circular No. A-127, Financial Management Systems. Specifically, Circular No. A-127 requires that financial management systems be designed to provide for effective and efficient interrelationships among software, hardware, personnel, procedures, controls, and data contained within the systems. Circular No. A-127 further states that financial systems must have common data elements, common transaction processing, consistent internal controls, and efficient transaction entry, and that reports produced by the systems shall provide financial data that can be traced directly to the general ledger accounts. SEC’s period-end financial reporting process for recording transactions, maintaining account balances, and preparing financial statements and disclosures is supported to varying degrees by a collection of automated systems that are not integrated or compatible with its general ledger system. Because these automated systems are not integrated or compatible, extensive compensating manual and labor-intensive accounting procedures, involving large spreadsheets and numerous posting and routine correcting journal entries, dominate SEC’s period-end financial reporting process. 
Some of SEC’s subsidiary systems, such as those for property and equipment and for disgorgements and penalties, do not share common data elements and common transaction processing with the general ledger system. Therefore, intermediary information processing steps, including extensive use of spreadsheets, manipulation of data, and manual journal entries, are needed to process the information in SEC’s general ledger. This processing complicates review of the transactions and greatly increases the risk that the transactions are not recorded completely, properly, or consistently, ultimately affecting the reliability of the data presented in the financial statements. Our identification this year of errors in SEC’s calculation of disgorgement and penalty accounts receivable, discussed below, illustrates this risk. The risk to data reliability is further increased because basic controls over electronic data, such as worksheet and password protection and change history, and controls over data verification, such as control totals and record counts, were not consistently used during the data processing between the source systems and the general ledger. In addition, SEC’s general ledger currently has several unconventional posting models and other limitations that prevent proper recording of certain transactions. As a result, SEC’s year-end reporting process requires extensive routine journal entries to correct errors created by incorrectly posted transactions in its general ledger. We also noted that the documentation SEC used to crosswalk individual accounts to financial statement line items for its year-end financial statement preparation process contained an incorrect routing to a line item on SEC’s Statement of Budgetary Resources, which caused a material error in SEC’s draft financial statements. Also, SEC did not have detailed written documentation of its methodologies and processes for preparing financial statements and disclosures, increasing the risk of inconsistent and improper reporting and the risk that disruptions and errors may arise when staff turnover occurs. As part of its enforcement responsibilities, SEC issues orders and administers judgments ordering, among other things, disgorgements, civil monetary penalties, and interest against violators of federal securities laws. SEC recognizes a receivable when it is designated in an order or a final judgment to collect the assessed disgorgements, penalties, and interest. At September 30, 2007, the gross amount of disgorgements and penalties accounts receivable was $330 million, with a corresponding allowance of $266 million, resulting in a net receivable of $64 million. In our reviews of the interim June 30, 2007, and year-end September 30, 2007, balances of accounts receivable for disgorgements and penalties, we found errors in SEC’s spreadsheet formulas resulting in overstatements of these receivable balances for both periods. These errors consisted of incorrectly changed spreadsheet formulas that affected the final calculated balances. SEC subsequently detected and corrected the June 30 errors, but then made different spreadsheet calculation errors in the year-end balances as of September 30, 2007, which we detected as part of our audit. SEC made adjustments to correct the errors, which were not material. However, SEC’s process for calculating its accounts receivable for disgorgements and penalties presents a high risk that significant errors could occur and not be detected. 
The main cause of these errors is the breakdown this year in the manual controls that were intended to compensate for the lack of an integrated accounting system for disgorgements and penalties, as discussed above. Specifically, although the journal entries posting the amounts to the general ledger were reviewed, this review did not extend to the preparation of the spreadsheet SEC used to document the accounts receivable calculation at June 30 and September 30, 2007, and therefore, was not sufficient to detect significant spreadsheet formula errors. As one of its sources of revenue, SEC collects securities transaction fees paid by self-regulatory organizations (SRO) for stock transactions. SRO transaction fees are payable to SEC twice a year: in March for the preceding September through December, and in September for the preceding January through August. Since the SROs are not required to report the actual volume of transactions until 10 business days after each month end, SEC estimates and records an amount receivable for fees payable by the SROs to SEC for activity during the month of September. At September 30, 2007, SEC estimated this receivable amount at $100.6 million. Based on information SEC received in mid-October concerning the actual volume of transactions, the amount of the receivable at September 30, 2007, should have been $74.4 million, an overstatement of $26.2 million. In previous years, SEC made adjustments to reflect the actual volume of transactions; however, SEC does not have written procedures to help ensure that this adjustment is made as a routine part of its year-end financial reporting process. We proposed, and SEC posted, the necessary audit adjustment to correct the amount of transaction fee revenue for fiscal year 2007. Statement on Auditing Standards No. 1, Codification of Auditing Standards and Procedures, which explains the accounting requirements for subsequent events, requires that events or transactions that existed at the date of the balance sheet and affect the estimates inherent in the process of preparing financial statements be considered for adjustment to, or disclosure in, the financial statements through the date that the financial statements are issued. In addition, the concept of consistency in financial reporting provides that accounting methods, including those for determining estimates, once adopted, should be used consistently from period to period unless there is good cause to change. In our review of SEC’s year-end draft financial statement disclosures, we noted numerous errors, including misstated amounts, improper breakout of line items, and amounts from fiscal year-end 2006 incorrectly brought forward as beginning balances for fiscal year 2007. For example, in its disclosure for Custodial Revenues and Liabilities, SEC improperly excluded approximately $320 million in collections. In another example, for its disclosure on Fund Balance with Treasury, SEC misclassified approximately $90 million into incorrect line items. Also, in its disclosure for Fiduciary Assets and Liabilities, SEC’s beginning balances for Fund Balance with Treasury and for Liability for Fiduciary Activity were each misstated by $8.9 million due to errors in carrying forward ending balances from September 30, 2006. SEC revised the financial statement disclosures to correct the errors that we noted. 
We believe these and numerous other errors in the disclosures resulted mainly from the lack of a documented timeline and process for completing the fiscal year 2007 financial statements and disclosures, including review of the disclosures. In addition, the cumbersome and complicated nature of SEC’s financial reporting process discussed above did not allow SEC finance staff sufficient time to carry out thorough and complete reviews of the disclosures in light of the November 15 reporting deadline. We also identified three control deficiencies that adversely affect SEC’s ability to meet its internal control objectives. These conditions concern deficiencies in controls over (1) information security, (2) property and equipment, and (3) accounting for budgetary resources, which are summarized below. SEC relies extensively on computerized information systems to process, account for, and report on its financial activities and make payments. To provide reasonable assurance that financial information and financial assets are adequately safeguarded from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction, effective information security controls are essential. These controls include security management, access controls, configuration management, physical security, and contingency planning. Weaknesses in these controls can impair the accuracy, completeness, and timeliness of information used by management and increase the potential for undetected material misstatements in the agency’s financial statements. During fiscal year 2007, SEC made important progress in mitigating certain control weaknesses that were reported as unresolved at the time of our prior review. For example, SEC developed a comprehensive program for monitoring access activities to its computer network environment, tested and evaluated the effectiveness of controls for the general ledger system, and documented authorizations for software modifications. SEC also took corrective action to restrict access to sensitive files on its servers, change default database accounts that had known or weak passwords, and apply strong encryption key management practices for managing secure connections. Despite this progress, SEC has not consistently implemented certain key information security controls to effectively safeguard the confidentiality, integrity, and availability of its financial and sensitive information and information systems. During this year’s audit, we identified continuing and new information security weaknesses that increase the risk that (1) computer resources (programs and data) will not be adequately protected from unauthorized disclosure, modification, and destruction; (2) access to facilities by unauthorized individuals will not be adequately controlled; and (3) computer resources will not be adequately protected and controlled to ensure the continuity of data processing operations when unexpected interruptions occur. For example, SEC had not yet mitigated weaknesses related to malicious code attacks on SEC’s workstations, had not yet adequately documented access privileges for a major application, and had not yet implemented an effective intrusion detection system. 
New control weaknesses in authorization, boundary protection, configuration management, and audit and monitoring that we identified this year include, for example, the use of a single, shared user account for posting journal vouchers in a financial application, inadequate patching of enterprise databases, and inadequate auditing and monitoring capabilities on SEC’s database servers. Lapses in physical security enabled unauthorized network access from a publicly accessible location within SEC Headquarters. In addition, SEC did not have contingency plans for key desktops that support manual processes such as the preparation of spreadsheets. These weaknesses existed, in part, because SEC has not yet fully implemented its information security program. Collectively, these problems represent a significant deficiency in SEC’s internal control over information systems and data. Specifically, the continuing and newly identified weaknesses decreased assurances regarding the reliability of the data processed by the systems and increased the risk that unauthorized individuals could gain access to critical hardware and software and intentionally or inadvertently access, alter, or delete sensitive data or computer programs. Until SEC consistently implements all key elements of its information security program, the information that is processed, stored, and transmitted on its systems will remain vulnerable, and management will not have sufficient assurance that financial information and financial assets are adequately safeguarded from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction. We will be issuing a separate report on issues we identified regarding information security concerns at SEC. SEC’s property and equipment consists of general-purpose equipment used by the agency; capital improvements made to buildings leased by SEC for office space; and internal-use software development costs for projects in development and production. SEC acquired approximately $27 million in property and equipment during fiscal year 2007. As in last year’s audit, during our testing of fiscal year 2007 additions we noted numerous instances of inaccuracies in recorded acquisition costs and dates for property and equipment purchases, as well as unrecorded property and equipment purchases and errors in amounts capitalized and amortized for internal-use software projects. In addition, errors were carried forward from the previous year. These systemic errors did not materially affect the balances reported for property and equipment or the corresponding depreciation/amortization expense amounts in SEC’s financial statements for fiscal year 2007; however, these conditions evidence a significant deficiency in controls over the recording of property and equipment that affects the reliability of SEC’s recorded property and equipment balances. Specifically, SEC lacks a process that integrates controls over capitalizing and recording property and equipment purchases. For example, SEC does not have a formalized, documented process for comparing the quantity and type of items received against the corresponding order for property purchases. In addition, SEC does not have sufficient oversight of the recording of acquisition dates and values of the capitalized property. 
Further, because SEC lacks an integrated financial management system for accounting for property and equipment, as discussed above, it must rely on compensating procedures, which were not effective, to ensure that manual calculations, such as those for depreciation and amortization, are accurate. Until it has a systemic process that incorporates effective controls over receiving, recording, capitalizing, and amortizing property and equipment purchases, SEC will not have sufficient assurance over the accuracy and completeness of its reported balances for property and equipment. For fiscal year 2007, SEC incurred $877 million in obligations, which represent legal liabilities against funds available to SEC to pay for goods and services ordered. At September 30, 2007, SEC reported that the amount of budgetary resources obligated for undelivered orders was $255 million, which reflects obligations for goods or services that had not been delivered or received as of that date. In our testing of undelivered order transactions for this year’s audit, we identified several concerns about SEC’s accounting for obligations and undelivered orders. Specifically, we found numerous instances in which SEC (1) recorded obligations prior to having documentary evidence of a binding agreement for the goods or services, (2) recorded invalid undelivered order transactions due to an incorrect posting configuration in SEC’s general ledger, and (3) made errors in recording new obligations and deobligations by using incorrect accounts and posting incorrect amounts in the general ledger. The majority of exceptions related to these issues, amounting to approximately $76 million, were corrected by SEC through adjusting journal entries. While the remaining uncorrected amounts did not materially affect the balances on the Statement of Budgetary Resources at September 30, 2007, the ineffective processes that caused these errors constitute a significant deficiency in SEC’s internal control over recording and reporting of obligations and put SEC at risk that the amounts recorded in the general ledger and reported on its Statement of Budgetary Resources are misstated. Specifically, SEC’s general ledger is not configured to properly post related entries, thereby creating the need to routinely correct entries. Extensive reviews of the budgetary transactions, along with significant adjusting journal entries, are needed to compensate for the system limitations. The errors in recording new obligations and deobligations that we found in our audit indicate a lack of effective review over those transactions. Further, SEC does not have policies or internal controls to prevent recording of obligations that are not valid. Recording obligations prior to having documentary evidence of a binding agreement for the goods and services is a violation of the recording statute and may result in funds being reserved unnecessarily and therefore made unavailable for other uses should the agreement not materialize. In addition, early recording of obligations may result in charging incorrect fiscal year funds for an agreement executed in a later fiscal year. Our tests for compliance with selected provisions of laws and regulations disclosed no instances of noncompliance that would be reportable under U.S. generally accepted government auditing standards or OMB audit guidance. However, the objective of our audit was not to provide an opinion on overall compliance with laws and regulations. Accordingly, we do not express such an opinion. 
SEC’s Management’s Discussion and Analysis and other accompanying information contain a wide range of data, some of which are not directly related to the financial statements. We do not express an opinion on this information. However, we compared this information for consistency with the financial statements and discussed the methods of measurement and presentation with SEC officials. Based on this limited work, we found no material inconsistencies with the financial statements or nonconformance with OMB guidance. However, because of the internal control weaknesses noted above, misstatements may occur in related performance information. SEC management is responsible for (1) preparing the financial statements in conformity with U.S. generally accepted accounting principles; (2) establishing, maintaining, and assessing internal control to provide reasonable assurance that the broad control objectives of FMFIA are met; and (3) complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the financial statements are presented fairly, in all material respects, in conformity with U.S. generally accepted accounting principles; and (2) management maintained effective internal control, the objectives of which are the following: Financial reporting: Transactions are properly recorded, processed, and summarized to permit the timely and reliable preparation of financial statements in conformity with U.S. generally accepted accounting principles, and assets are safeguarded against loss from unauthorized acquisition, use, or disposition. Compliance with applicable laws and regulations: Transactions are executed in accordance with (1) laws governing the use of budgetary authority, (2) other laws and regulations that could have a direct and material effect on the financial statements, and (3) any other laws, regulations, or governmentwide policies identified by OMB audit guidance. We are also responsible for (1) testing compliance with selected provisions of laws and regulations that could have a direct and material effect on the financial statements and for which OMB audit guidance requires testing and (2) performing limited procedures with respect to certain other information appearing in SEC’s Performance and Accountability Report. 
In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the financial statements; assessed the accounting principles used and significant estimates made by management; evaluated the overall presentation of the financial statements; obtained an understanding of SEC and its operations, including its internal control related to financial reporting (including safeguarding of assets) and compliance with laws and regulations (including execution of transactions in accordance with budget authority); obtained an understanding of the design of internal controls related to the existence and completeness assertions relating to performance measures as reported in Management’s Discussion and Analysis, and determined whether the internal controls have been placed in operation; tested relevant internal controls over financial reporting and compliance with applicable laws and regulations, and evaluated the design and operating effectiveness of internal control; considered SEC’s process for evaluating and reporting on internal control and financial management systems under the FMFIA; and tested compliance with selected provisions of the following laws and their related regulations: the Securities Exchange Act of 1934, as amended; the Securities Act of 1933, as amended; the Antideficiency Act; laws governing the pay and allowance system for SEC employees; the Prompt Payment Act; and the Federal Employees’ Retirement System Act of 1986. We did not evaluate all internal controls relevant to operating objectives as broadly defined by the FMFIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to controls over financial reporting and compliance. Because of inherent limitations in internal control, misstatements due to error or fraud, losses, or noncompliance may nevertheless occur and not be detected. We also caution that projecting our evaluation to future periods is subject to the risk that controls may become inadequate because of changes in conditions or that the degree of compliance with controls may deteriorate. We did not test compliance with all laws and regulations applicable to SEC. We limited our tests of compliance to those required by OMB audit guidance and other laws and regulations that had a direct and material effect on, or that we deemed applicable to, SEC’s financial statements for the fiscal year ended September 30, 2007. We caution that noncompliance may occur and not be detected by these tests and that this testing may not be sufficient for other purposes. We performed our work in accordance with U.S. generally accepted government auditing standards and OMB audit guidance. SEC’s management provided comments on a draft of this report. They are discussed and evaluated below and are reprinted in appendix III. In commenting on a draft of this report, SEC’s Chairman said he was pleased to receive an unqualified opinion on SEC’s financial statements. The Chairman discussed SEC’s plans to remediate the material weakness before the end of fiscal year 2008 and to address each of the findings and recommendations identified during the audit. The Chairman emphasized SEC’s commitment to enhance its controls in all operational areas and to ensure reliability of financial reporting, soundness of operations, and public confidence in SEC’s mission. The complete text of SEC’s comments is reprinted in appendix III.
Established in 1934 to enforce the securities laws and protect investors, the Securities and Exchange Commission (SEC) plays an important role in maintaining the integrity of the U.S. securities markets. Pursuant to the Accountability of Tax Dollars Act of 2002, SEC is required to prepare and submit to Congress and the Office of Management and Budget audited financial statements. GAO agreed, under its audit authority, to perform the audit of SEC's financial statements. GAO's audit was done to determine whether, in all material respects, (1) SEC's fiscal year 2007 financial statements were reliable and (2) SEC's management maintained effective internal control over financial reporting and compliance with laws and regulations. GAO also tested SEC's compliance with certain laws and regulations. In GAO's opinion, SEC's fiscal year 2007 and 2006 financial statements were fairly presented in all material respects. However, because of a material weakness in internal control over SEC's financial reporting process, in GAO's opinion, SEC did not have effective internal control over financial reporting as of September 30, 2007. Recommendations for corrective action will be included in a separate report. Although certain compliance controls should be improved, SEC did maintain in all material respects effective internal control over compliance with laws and regulations material in relation to the financial statements as of September 30, 2007. In addition, GAO did not find reportable instances of noncompliance with the laws and regulations it tested. In its 2006 report, GAO reported on weaknesses in the areas of SEC's (1) recording and reporting of disgorgements and penalties, (2) information systems controls, and (3) property and equipment controls. During fiscal year 2007, SEC improved its controls over the accuracy, timeliness, and completeness of the disgorgement and penalty data and used a much improved database for the initial recording and tracking of these data. However, the processing of these data for financial reporting purposes is still done through a manual process that is prone to error. GAO found that the internal controls that compensated for the manual processing of the related accounts receivable balances in fiscal year 2006 were not effective in fiscal year 2007. This issue is included in the material weakness in SEC's financial reporting process for fiscal year 2007. Other control deficiencies included in this material weakness concern SEC's period-end closing process, accounting for transaction fee revenue, and preparation of financial statement disclosures. GAO also identified three significant deficiencies in internal control during fiscal year 2007. Although SEC has taken steps to strengthen its information security, some of the weaknesses identified in GAO's previous audit persisted and GAO found new weaknesses during this year's audit. Therefore, GAO is reporting information security as a significant deficiency as of September 30, 2007. In addition, GAO continued to identify the same weaknesses in controls over property and equipment and therefore considers this area a significant deficiency as of September 30, 2007. GAO also identified a new significant deficiency concerning SEC's accounting for budgetary transactions. In commenting on a draft of this report, SEC's Chairman emphasized SEC's commitment to enhance its controls in all operational areas and to ensure reliability of financial reporting, soundness of operations, and public confidence in SEC's mission.
The U.S. manufacturing sector comprises businesses that are engaged in the mechanical, physical, or chemical transformation of materials, substances, or components into new products, including sectors such as machinery, textiles, apparel, food production, and chemicals. However, U.S. policy makers have become focused on competing in high-end, or “advanced manufacturing.” While no consensus definition of advanced manufacturing exists, it refers generally to the production of scientifically- and technologically-intensive products, in which the economic value derives from inputs of knowledge and design more than it reflects traditional inputs such as labor and materials. Robotics, nanomanufacturing, and electric vehicles are examples of advanced manufacturing sub-industries. Statistics present a mixed picture about the health of U.S. manufacturing, both relative to the rest of the U.S. economy and to other countries’ manufacturing sectors. According to data from BLS, manufacturing employment has fallen from 17.6 million workers in 1998 to 11.5 million in early 2010, a decline of over one-third over a period in which total U.S. employment grew somewhat. However, the decline in U.S. manufacturing employment is not a new phenomenon, and a longer-term view shows a steady decline of manufacturing’s share of all American jobs. As figure 1 shows, the percentage of U.S. nonfarm workers in manufacturing has dropped steadily since the end of World War II, from about 35 percent in 1945 to about 9 percent in 2012. Since bottoming out in 2010, manufacturing employment rebounded slowly up to about 12 million workers at the end of 2012. Also, other advanced economies, such as Canada, Germany, Japan, and the United Kingdom, suffered large manufacturing job losses from 1998 to 2011, suggesting that global economic forces have affected manufacturing employment in addition to any factors that may be unique to the United States. Similar to the employment trend, manufacturing has accounted for a decreasing share of U.S. economic output over the last several decades, from about 28 percent of U.S. gross domestic product (GDP) in the early 1950’s to a recent low of 11 percent in 2009 (see fig. 2). Moreover, the decrease in manufacturing’s share of employment and GDP could reflect increasing worker productivity in manufacturing and the emergence and growth of other U.S. industries. According to data from BLS, U.S. manufacturing productivity, measured as output per hour, rose 55.7 percent from 2002 to 2011, exceeded only by the Czech Republic, South Korea, Singapore, and Taiwan among 19 measured countries. Furthermore, after contracting in 2008 and 2009, manufacturing contributed more to the percent change in U.S. GDP than any other industry group in 2010 and 2012, as well as playing a leading role in somewhat weaker GDP growth in 2011. When compared to the manufacturing sectors in other countries, some statistics show that the United States performs well. Figure 3 shows the change in manufacturing value-added for Canada, China, Germany, Japan, South Korea, and the United States from 1998 to 2010, in constant year 2000 U.S. dollars. The figure shows that China and South Korea have experienced a rapid increase in manufacturing production over this period, while U.S. manufacturing value-added has grown about as fast as that in Japan, Germany, and Canada. Some manufacturing experts, however, maintain that official statistics misrepresent the state of U.S. 
manufacturing because productivity and value-added statistics do not properly account for the value of imported inputs in goods manufactured in the United States. As these imports become cheaper, or as manufacturers shift to lower-cost imported inputs, the value-added of the resulting manufactured good rises, suggesting more manufacturing “production,” even though nothing meaningful may have changed about manufacturing competitiveness. The Information Technology and Innovation Foundation, an innovation policy think tank, estimated in a 2012 report that official statistics overstate productivity growth from 2000 to 2010 by 122 percent.

Not all experts agree on what role, if any, the government should play in supporting manufacturing. Economic theory generally suggests that government intervention into private sector activity is justified by “market failure”—situations in which the private market under- or over-produces a good because private interests differ from society’s. Those supportive of enhancing productivity in manufacturing suggest that government policy should target the sector in order to remedy market failures that may hinder innovation—the development and application of new knowledge. Innovation underpins improvements in the way capital and labor are combined to create new products and increase productivity. This makes it critical for the broader economy and particularly important for manufacturing. An important element of innovation is research and development (R&D), the testing and application of new ideas, which is seen as a key source of new products and technologies. The private sector, however, faces disincentives to investing in R&D—it may be expensive, it often fails, willing firms may lack sufficient finances, and successful R&D may produce benefits that the investing firm cannot capture—leading to possible underinvestment in R&D and underproduction of innovation without government support. These disincentives may be particularly difficult to overcome for small- and medium-sized enterprises (SMEs).

Though innovation policy can address market failure across all sectors of the economy, advocates of targeted innovation policy argue that it may provide particular benefit to manufacturing. They note that the sector depends on continually creating new ideas for products and ways to make those products. They also observe that manufacturing is a significant source of R&D; according to the National Science Foundation, the sector accounted for 70 percent of private-sector spending on R&D in the United States in 2008. In practical terms, to support needed innovation, the government may intervene through various policies, some of which may have a focus on the manufacturing sector. These include:

Public support for “basic” R&D in science and engineering, which, while conducted without specific commercial applications in mind, can spur private-sector innovation. The public sector may be well-suited to conducting basic R&D directly, through government scientific agencies, public universities, and other research institutions, because it is unlikely that most private firms would conduct this type of general research without a potentially profitable application in mind.

Public support for private-sector “applied” R&D, research that seeks to solve practical problems or develop new products for commercialization.
Applied R&D is seen as a key component in helping innovators overcome the so-called “valley of death,” the difficult transition between new ideas and commercially viable manufacturing products or processes. Support for applied R&D could take various forms:

Subsidies for private investment in R&D, through direct funding or tax incentives, and assistance with financing for private R&D projects with commercialization potential, which may overcome the difficulty some firms may face in obtaining funding from private financial markets. However, it may be difficult for the government to determine which firms merit subsidy because of the lack of information or foresight into an individual firm’s growth prospects.

Public infrastructure investment that facilitates R&D and knowledge transfer, such as research laboratories, transportation investment, and “knowledge” infrastructure such as broadband telecommunications, the development of measurement techniques and databases, and the dissemination of technical expertise. Experts have referred to such widely accessible infrastructure or knowledge as the “industrial commons” that provides a base for innovation and production, and see investment in these commons as an important source of new ideas for products or processes and solutions to existing problems.

Public support for innovation clusters—regional concentrations of large and small companies that develop creative products and services, along with specialized suppliers, service providers, universities, and associated institutions. Firms in a cluster may be able to share knowledge and transact business at lower cost than if they were far apart, possibly leading to increased innovation. However, the effectiveness of cluster policy has not been established; the formation of successful clusters in the United States, such as California’s Silicon Valley, suggests that government support for clusters may not be necessary.

Manufactured goods represented 86 percent of all U.S. goods exported and 60 percent of total U.S. exports. (For more on nontariff barriers, see GAO, Export-Import Bank: Reaching New Targets for Environmentally Beneficial Exports Presents Major Challenges for Bank, GAO-10-682 (Washington, D.C.: July 14, 2010); and International Trade: Four Free Trade Agreements GAO Reviewed Have Resulted in Commercial Benefits, but Challenges on Labor and Environment Remain, GAO-09-439 (Washington, D.C.: July 2009).) Figure 4 provides a summary of some key types of support that governments can provide to support innovation, training, and trade, which can benefit manufacturing and other sectors.

In the United States, the federal government has generally taken the lead in supporting basic research, providing the economic framework, and constructing infrastructure. The Department of Commerce (Commerce) administers manufacturing programs through sub-agencies such as the National Institute of Standards and Technology (NIST), the Economic Development Administration (EDA), and the International Trade Administration. Other U.S. agencies support manufacturing as part of their program activities, including the Department of Defense, the Department of Energy, the National Aeronautics and Space Administration, and the National Science Foundation. The Department of Labor (Labor) administers training programs for job seekers through the Employment and Training Administration. In addition, tax breaks such as the R&D tax credit further benefit manufacturers (although these provisions do not apply exclusively to manufacturers).
States and localities have the main responsibility for education and also are most active in promoting regional economic development, including measures that support innovation. See appendix II for more information on recent U.S. manufacturing initiatives. The four countries we analyzed—Canada, Germany, Japan, and South Korea—take varied approaches to government support for manufacturing, with each providing a different mix of programs to support its manufacturing sector. For example, Canada has started directly supporting SMEs to encourage innovation. Germany has created programs for innovation and maintained long-standing programs to support export promotion and skills training. Recently, Japan’s manufacturing policies have emphasized alternative energy and the production and innovation that come from that sector. Japan also prioritizes providing hands-on assistance to SMEs. South Korea has substantially expanded investments in R&D to strengthen its manufacturing sector. Figure 5 presents key manufacturing statistics for each of these countries and the United States.

Recent trends in the Canadian economy, including the rising value of the Canadian dollar to near parity with the U.S. dollar and declining productivity growth, have put pressure on Canada’s manufacturing sector. In 2010, according to the Canadian government, Canada continued to lag behind other advanced economies in terms of business innovation performance despite a high level of federal support for R&D. In response, Canada’s 2010 budget called for a comprehensive review of all federal support for R&D. The resulting report—commonly referred to as the Jenkins report—catalogued a set of 60 R&D programs worth about 5 billion Canadian dollars in fiscal year 2010-2011. The Jenkins report found that Canada’s support for business innovation was heavily weighted toward the Scientific Research and Experimental Development (SR&ED) tax credit, but that the calculation of some SR&ED expenses was highly complex, which resulted in excessive compliance costs for SMEs in particular. The report also found that other countries relied less than Canada on indirect tax incentives to stimulate innovation, and that Canadian federal policy should provide more effective support to innovative firms, particularly SMEs, to help them grow and become competitive. To address these findings, the Jenkins report recommended simplifying the SR&ED tax credit and redeploying the savings toward more direct support to SMEs in order to encourage innovation. To further expand opportunities for innovation in Canada, the Jenkins report also recommended that the government provide innovative firms with more access to venture capital, and make better use of government procurement by leveraging the government’s substantial purchasing power to create demand for leading-edge goods, services, and technologies from Canadian enterprises.

Canada’s 2012 national budget, in turn, contained several changes that acted on the Jenkins report recommendations. According to Canadian budget documents, effective 2014, the SR&ED tax credit will be reduced; the budget of the Industrial Research Assistance Program, which employs a national network of technical advisors who work directly with SMEs to help them grow through the commercialization of innovative products and services, was increased. The 2012 budget also announced a new $400 million venture capital fund to support innovative start-up firms.
To address the report’s recommendation on procurement, the Canadian Innovation Commercialization Program was made permanent in order to assist SMEs in doing business with the government of Canada. Table 1 highlights examples of manufacturing-related programs in Canada. For further information on Canadian programs included in this review, see Appendix III. Despite slow economic growth at the turn of the century and contraction in 2008-2009, the German economy has grown steadily over recent years. According to German officials, this growth has been in part a result of the strength of Germany’s manufacturing sector, which accounts for about 22 percent of GDP. German officials told us that after the recession of 2008-2009, manufacturing recovered relatively quickly in part because of an arrangement between unions, employers, and the government through which (1) employers reduced their employees’ hours to avoid layoffs, and (2) the government subsidized a portion of employees’ lost salaries. According to German officials, this arrangement allowed businesses to continue to operate through the economic downturn, and then expand workers’ hours once the economy recovered. A 2012 OECD report estimates that the agreement may have prevented up to 500,000 layoffs. To make the most of existing growth potential and open new prospects for German industry, the German government issued its High Tech Strategy 2020 in 2006. The strategy guides the specific efforts across national government agencies and programs. Specifically, it states that in order for Germany to become a leader in solving global challenges, the government will need to stimulate R&D in five priority areas: (1) climate and energy, (2) health and nutrition, (3) mobility, (4) security, and (5) communication. The German government has, in turn, recently established several programs to promote innovation in these areas. The High Tech Strategy 2020 provides a framework for recent programs that encourage applied research and innovation, particularly in SMEs, and also for a program that supports business clusters that conduct R&D in the strategy’s five priority areas. According to German officials, SMEs are a significant part of the German economy and have long played a role in German manufacturing. However, the national government has identified innovation as a challenge across the SME sector. In response, according to German officials and German government documents, the German government has in recent years initiated a group of programs intended to strengthen innovation in SMEs. These programs—all initiated in 2006 or later—include the following: The Central Innovation Program for SMEs, which is Germany’s largest program to support innovation in SMEs, provides grant funding to pursue innovative ideas that show high potential for commercialization. The HighTech Grunderfonds program is a public-private venture capital fund that invests in innovative start-up companies. Signo provides federal assistance to SMEs in securing intellectual property for innovative products and helps SMEs file for patents with the German Patent and Trademark Office. According to German officials and German government documents, as part of its High Tech Strategy, Germany also established the Spitzencluster program to continue the national emphasis on innovation by funding business clusters judged through a competitive application process to be the best, or “leading edge” clusters in the country. 
In addition to these more recent programs, according to representatives of Germany’s Fraunhofer Institutes, skilled Fraunhofer researchers pursue joint applied R&D projects with businesses that result in commercializable processes and products. Germany established the Fraunhofer Institutes, a nationwide network of 60 applied research facilities with research expert staff, in 1949 as part of efforts to rebuild its research infrastructure after World War II, according to Fraunhofer officials. Fraunhofer’s applied research projects include the following categories of specialization: (1) materials and components, (2) microelectronics, (3) information and communications technology, (4) production, (5) light and surfaces, and (6) life sciences. Fraunhofer officials told us that Fraunhofer Institutes are co-located with universities, which allows companies access to skilled researchers. In contrast to Germany’s newer programs to support innovation, Germany’s main national system to support the export of manufactured goods has a much longer history of providing support to the manufacturing sector. According to German officials, Germany is a leading exporter of manufactured technology goods. German officials also told us that Germany’s long-established export promotion organization, the Association of Chambers of Commerce and Industry, brings together an agency of the national government and all exporting businesses to share export information. Germany fosters export activities in two main ways: (1) by selectively establishing partnerships abroad, and (2) by providing assistance for trade fair attendance and participation in trade delegations. In addition to programs in innovation and trade, Germany also maintains a dual training system, which was established in law in 1970 but has existed in practice for centuries, according to German officials. German officials explained that the dual training system—through which German high school-age students complete apprenticeships in skilled trades—is a cooperative effort among business, labor, federal and state government representatives, coordinated by the Federal Institute for Vocational Education and Training. The Federal Institute for Vocational Education and Training, an institute of the national government, is responsible for regularly incorporating stakeholder feedback into the process of creating and updating skills certification standards. The executive board of the Federal Institute for Vocational Education and Training includes representatives from German unions, employers’ associations, federal agencies, and state governments. Because of this role in bringing together stakeholders in the skills education process, the Federal Institute for Vocational Education and Training is often referred to as the “parliament” of vocational education in Germany. Table 2 highlights examples of manufacturing-related programs in Germany. For further information on German programs included in this review, see Appendix III. After two decades of economic stagnation and fallout from the 2011 Fukushima earthquake and nuclear disaster, Japan has made efforts to strengthen its economy—including its manufacturing sector—and improve its global competitiveness. 
Japan’s manufacturing sector has been recognized in the past for its ability to make incremental improvements to manufactured products—for example, small just-in-time improvements made specifically for a subsequent phase of the manufacturing process—illustrated by the often-copied lean manufacturing practices that a well-known automobile manufacturer developed over several decades. Officials from the Ministry of Economy, Trade, and Industry (METI), the country’s main ministry for manufacturing policy, identified genbaryoku—capabilities to find and solve problems in the field—as a unique source of strength in Japan’s manufacturing industry. According to these officials, this capability helped Japan to restore its economy quickly after damage from the earthquake. In the wake of the 2011 Fukushima crisis, many SMEs went out of business, and global companies, including automobile manufacturers, faced delays in delivery of inputs and in production, according to Japanese officials. As a result, METI officials said that the Japanese government and automobile industry started working to establish more diverse and reliable supply networks.

In 2007, the Japanese government published a comprehensive innovation plan: the “Innovation 25” initiative, a long-term strategy for innovation in engineering, information technology, and other fields by the year 2025. This initiative established a cabinet-level minister for innovation and called for several new policies, including: (1) reviewing regulations to establish an environment that supports innovation, (2) promoting the use of new technologies in the public sector, and (3) strengthening activities for international standardization. According to the Center for Strategic and International Studies, this plan introduced the concept of an innovation “ecosystem” in Japan, which emphasizes collaboration among universities, research institutes, the private sector, and government—similar to clusters—rather than the private sector acting alone to develop and commercialize innovations. Japan developed its most recent 5-year Science and Technology Basic Plan in 2011. This plan is aimed at reconstruction and revival from the Fukushima disaster and realizing sustainable growth, for example, by focusing on green innovation. The goal of this Basic Plan is to provide a concrete plan for implementing Japan’s comprehensive New Growth Strategy introduced in the same year.

As an outgrowth of the third science and technology plan, the Japanese government initiated several regional innovation cluster programs to enhance Japan’s competitiveness. One of these programs, the Industrial Cluster Project, is composed of groups of local SMEs and venture businesses that use research obtained from universities and other institutions. One of 18 such clusters in Japan—the Technology Advanced Metropolitan Area (TAMA) Association—has over 600 entities, including universities, financial institutions, local governments, businesses, and industry groups, according to one TAMA Association official. The TAMA Association supports local SME manufacturers by matching them with larger businesses that have complementary needs at the national, regional, and local levels to improve R&D and commercialization of technology and products. For example, the TAMA Association connects manufacturers in need of a particular type of R&D to university researchers with projects in that field.
In response to the 2011 Fukushima nuclear disaster, Japan has intertwined energy issues—especially alternative energy projects—in its manufacturing policy. The national government has laid out detailed alternative energy policies through its 2011 and 2012 comprehensive Rebirth of Japan strategies. Among other things, the 2011 strategy outlines support for: (1) adopting renewable energies; (2) developing R&D hubs consisting of universities, research institutions, and private firms for industrial development and job creation purposes; and (3) adopting electric, heat, and other energy supply systems that make use of regional resources. The 2012 strategy outlines increased R&D for creating innovative green parts and materials, developing green vehicles, and improving battery performance. METI established the Next Generation Vehicle (NGV) Program, a key alternative energy initiative. According to METI officials, NGV’s strategy takes an integrated approach involving six components: (1) development and production of the vehicles; (2) battery R&D and technology; (3) rare metal and resource recycling systems; (4) installation and infrastructure of chargers; (5) vehicle systems; and (6) international standards for battery performance and safety evaluation methods—and associated roadmaps. According to METI officials, NGV identifies diffusion targets for alternative-fuel vehicles and the development of related technologies. For example, one of its goals is to develop advanced batteries for automobiles that will also have other uses, such as powering homes. As part of the NGV Program, Japan’s government, in conjunction with industrial leaders, seeks to influence international technological standards for related manufacturing accessories, including battery performance and chargers, for which various countries are developing competing models. The government also funds alternative energy projects, as well as other R&D intensive private-sector projects with commercial potential, through the New Energy and Industrial Technology Development Organization (NEDO). According to NEDO officials, NEDO connects university researchers and industry to collaborate on joint research, such as R&D in support of batteries and hydrogen fuel cells for electric vehicles. The Rebirth of Japan strategies also include significant support for strengthening SMEs. For example, the 2011 strategy outlines overcoming the “valley of death”—the gap between innovative ideas and commercializable production—by promoting cooperation between industry, academia, and the government; encouraging joint R&D projects; and supporting overseas business for SMEs. The government also encourages SME technological innovation by offering technical and business support through a national network of Public Industrial Technology Research Institutes—known as Kohsetsushi centers. According to Japanese officials, these centers provide SME manufacturers with a range of services including technology guidance; technical assistance and training; networking; testing, analysis, and instrumentation; and access to open laboratories and test beds, and they typically offer technical consultation services free of charge. Kohsetsushi Centers support Japanese SME manufacturers in adopting emerging technologies, including nanotechnology and robotics. 
For example, the Tokyo Metropolitan Industrial Technology Research Institute (TIRI) serves about a quarter of Tokyo’s 40,000 manufacturers across three locations, primarily by providing services, information, and testing equipment and facilities to SMEs, according to TIRI officials. In addition, the Kawasaki Business Incubation Center rents offices and lab space to SMEs and entrepreneurs and provides some free services, such as introductions to potential partners and funding entities and support for completing applications for government subsidies or loans and establishing a registered corporation, according to Kawasaki Business Incubation Center officials. The center is located in close proximity to a number of larger companies and research institutes, which incubation officials told us helps facilitate collaboration. The center also provides training sessions on topics including machine operation to help companies acquire necessary technical skills. Having these resources nearby helps companies to move from basic R&D to practical applications in commercial products, and eventually to mass production, since many of the tools needed for designing and manufacturing are in one place, according to Kawasaki City officials. Table 3 highlights examples of manufacturing-related programs in Japan. For further information on Japanese programs included in this review, see Appendix III.

Within the last 50 years, South Korea has shifted from receiving U.S. development assistance to becoming an OECD aid donor to other countries. According to the United States Agency for International Development, it is the only country to make this shift to date. Between 1999 and 2011, South Korean manufacturing output (in current U.S. dollars) almost tripled. This rise has coincided with an increase in its investment in R&D, from approximately 2.2 percent of GDP in 1999 to approximately 3.4 percent in 2008, according to OECD statistics. As table 4 shows, South Korea’s percentage increase in R&D spending over this period exceeded that of the other countries in our study, and as of 2009, South Korea spent more on R&D as a percentage of GDP than the other countries. The South Korean government has invested in various research institutes, including those that are state-financed, university-based, and private-sector driven. According to Commerce officials in South Korea, every government ministry invests in several research institutes. For example, the Ministry of Science, ICT, and Future Planning supports approximately 25 research institutes, including the Electronics and Telecommunications Research Institute (ETRI), according to ETRI officials. ETRI is a global information technology research institute and the largest government-funded research institute in South Korea; its work has helped establish South Korea as a leader in information and communications technology, such as smart phones and mobile computing.

As part of South Korea’s 2009 growth strategy, the national government has emphasized its plans to train SMEs, promote R&D, and expand green energy technology development. For example, the government provides, through various research institutes, testing and standardization equipment and labs that SMEs would not otherwise be able to access, according to officials from the Ministry of Trade, Industry, and Energy (formerly known as the Ministry of Knowledge Economy)—the main ministry for manufacturing policy.
South Korea also plans to encourage innovation and help make South Korea a world leader in green technology by turning green energy industries—such as renewables and smart grids—into export industries, and encouraging current industries to become green, according to government documents. According to national government officials responsible for coordinating South Korea’s green growth policies, most green growth programs fit within South Korea’s larger manufacturing strategy, and the policy mechanisms that have been used have been integrated into or build on existing programs. These officials stated that existing tax subsidies for emerging industries, including information technology and biotechnology, have recently been extended to green areas. They pointed out that the government provides an R&D tax credit for private firms using green technology: 20 percent of total investment in green technology for large companies, and 30 percent for SMEs.

South Korea has also emphasized the development of a network of technoparks—regional innovation centers that provide manufacturing assets, R&D facilities, business incubation, and education and production assistance to industry—to encourage growth and development throughout the country. This initiative is, in part, intended to help fuel development outside of Seoul, where most economic activity is centered. For example, Daejeon Technopark (Daejeon) assists with R&D by encouraging collaboration between industry, academia, research institutes, and local government, according to Daejeon officials. Specifically, it connects SMEs to researchers or universities working on related research. It also supports technology sharing by providing SMEs access to technology, along with the support and expertise of the park’s professional staff. Table 5 highlights examples of manufacturing-related programs in South Korea. For further information on South Korean programs included in this review, see Appendix III.

When compared to the United States, the countries in our study offer some key distinctions in government programs to support the manufacturing sector. Based on our comparison with selected U.S. programs, the foreign countries place a stronger emphasis on innovation programs that support commercialization, especially through programs that provide technical support and product development assistance, as well as support for infrastructure and clusters. In contrast, the United States spends a relatively high amount on competitive funding for R&D projects with commercial potential. Within trade policy, countries in our study all provide similar services, but there are several differences in how they are delivered. For example, the United States is an acknowledged leader in intellectual property protection, but the United States government plays a less prominent role than Japan in developing technological standards. Regarding training programs, Germany’s national government has a long history of managing a dual training system to provide graduates with vocational training and nationally recognized credentials and help ensure a supply of skilled manufacturing workers. The United States does not have a comparable program on such a scale. However, some federal, local, and private sector entities in the United States are taking steps to provide work-based and academic learning tailored to manufacturers’ needs and develop a framework for nationally portable credentials.
In assessing differences among countries in program funding levels, it is important to keep in mind that higher relative funding levels may not necessarily produce better outcomes. While the United States and the four countries we studied all provide support for innovation and R&D, Canada, Germany, Japan, and South Korea have made commercialization a central goal of their innovation programs. Each of the four foreign countries has taken a multi-pronged approach to spur innovation and help manufacturers bridge the “valley of death” between concept and market. The programs they implement to achieve these goals place a particular emphasis on bringing SMEs into the innovation process. Innovation programs abroad incorporate three broad strategies: (1) providing technical support and product development for client firms, especially SMEs; (2) fostering collaboration between manufacturers and researchers, as well as between small and large manufacturers; and (3) providing competitive grants for private-sector R&D efforts with commercial potential. While the United States offers many similar types of programs, the U.S. programs we identified offer somewhat less extensive technical support and product development assistance than those in some foreign countries, but relatively high funding for R&D grants.

Canada, Germany, and Japan have set up national networks of centers that provide a wide range of hands-on technical and business support services to manufacturing firms, especially SMEs. The focus of many of these programs suggests that these countries see SMEs as a rich potential source of innovation that market barriers, such as the financial risks of conducting R&D, might impede without government support. For example, Japan’s Tokyo Metropolitan Industrial Technology Research Institute (TIRI), one center among the national Kohsetsushi network of 182 centers, offers a wide array of services and facilities to SMEs, including testing services, laboratories for product development, information on international technical standards, and intellectual property support. TIRI also offers collaborative research partners for SMEs to engage in R&D for product and technology development. Germany’s Fraunhofer Institutes also operate an extensive network of nationwide centers—serving both SMEs and large manufacturers—that offer university-affiliated research expertise to clients. According to Fraunhofer officials, product and technology commercialization are central objectives of their centers. Canada’s Industrial Research Assistance Program (IRAP), that country’s national SME support network, emphasizes the role of expert technical advisors in helping clients commercialize their products through expertise with R&D, networking, and business strategy.

The United States has a comparable program in the Hollings Manufacturing Extension Partnership (MEP) network of technical support centers aimed at SMEs, which is administered by Commerce’s NIST. MEP operates a national network of 60 centers to provide support to SME manufacturers, focusing on helping manufacturers in five key areas: (1) technology acceleration, (2) supplier development, (3) sustainability, (4) workforce, and (5) continuous improvement. Specifically, MEP centers enter into contracts with companies to deliver technical assistance to improve their manufacturing processes and productivity, expand capacity, adopt new technologies, utilize best management practices, and accelerate company growth.
However, MEP officials with whom we spoke said that MEP centers offer a more limited focus on commercialization, and do not typically offer testing equipment or widespread expertise in product commercialization. Instead, MEP may connect client firms to third parties offering specific services. As table 6 shows, Canada, Germany, and Japan invest more money and resources in their technical support programs than the United States does in MEP. According to NIST officials, MEP receives about $100 million in government funding, and two-thirds of its revenues come from other sources such as client fees, states, or other partner resources. Canada’s IRAP, in comparison, had funding of $143 million (U.S.) in 2011-12, with an expanded budget of $257.6 million for 2012-13, a much higher investment relative to the size of the economy or the manufacturing sector than the United States makes in MEP. Further, according to IRAP officials, the program provides its client services free of charge. MEP’s technical staff number approximately 1,300, a much larger staff than IRAP’s, but Japan’s and Germany’s programs exceed MEP in both funding and the number of technical staff.

Canada, Germany, Japan, and South Korea also encourage manufacturing commercialization through programs that facilitate collaboration between manufacturers and researchers. Specifically, several foreign programs we analyzed support collaboration by providing access to facilities and funding for business clusters—almost a literal implementation of investment in the industrial “commons”—with programs that have been in operation longer than those in the United States. Japan’s Kawasaki-region business incubation centers provide office space, research laboratories, and testing facilities. South Korea’s Daedeok Innopolis consists of universities, research institutes, government and government-invested institutions, corporate research institutes, and venture corporations. These programs may encourage opportunities for applied R&D and product development not only through access to facilities, but also through interaction among companies in close physical proximity to each other. According to program officials, Japan’s TAMA Association, one site among 18 in the country’s Industrial Cluster project, and South Korea’s Daedeok Innopolis help SMEs match technologies they develop with larger companies that may be able to apply these technologies to products they make, or processes for making them, which may increase technology dissemination. Germany’s Spitzencluster program has encouraged cluster formation by providing funding to clusters judged to be among the country’s best, or “leading edge”; the program has awarded three rounds of funding of up to approximately $257 million per round to 15 total selected clusters. Canada offers manufacturers access to research facilities to conduct R&D in various scientific fields through its National Research Council.

In the United States, the federal government has recently begun to increase support for clusters. The Small Business Administration (SBA) Regional Innovation Cluster Initiative, a U.S. federal government cluster program piloted in 2010, has funded 10 existing U.S. clusters, with 7 clusters receiving funding of $2.7 million in fiscal year 2012, according to SBA. SBA’s 1-year evaluation of the initiative showed positive results, including over two-thirds of participating businesses reporting development of a new product, and over half commercializing new technology.
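The comparison above characterizes Canada’s IRAP budget as a much higher investment than U.S. funding for MEP relative to the size of the economy. A minimal sketch of that normalization, written in Python, is shown below; the program funding figures are the ones cited in this section, while the GDP figures are rough, illustrative assumptions supplied only to demonstrate the calculation and are not data from this report’s tables.

# Illustrative sketch: program funding relative to economy size.
# Funding figures are from this section; GDP figures are rough assumptions
# used only to demonstrate the calculation, not reported data.

programs = {
    # name: (annual program funding, nominal GDP), both in billions of U.S. dollars
    "United States (MEP)": (0.100, 16_000),  # ~$100 million in federal funding; GDP assumed ~$16 trillion
    "Canada (IRAP)": (0.258, 1_800),         # ~$257.6 million for 2012-13; GDP assumed ~$1.8 trillion
}

for name, (funding, gdp) in programs.items():
    per_thousand_of_gdp = funding / gdp * 1_000  # program dollars per $1,000 of GDP
    print(f"{name}: ${per_thousand_of_gdp:.3f} of program funding per $1,000 of GDP")

Under these assumptions, Canada’s investment works out to more than 20 times the U.S. level per $1,000 of GDP, which is the sense in which the text above describes IRAP funding as much higher relative to the size of the economy.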
In 2011, several federal partners, led by the Economic Development Administration, funded the Jobs and Innovation Accelerator Challenge (JIAC), the first interagency cluster initiative. JIAC provided $37 million to 20 existing clusters. Later in 2011, $9 million was awarded to 13 clusters in rural areas. In 2012, the third JIAC awarded $20.2 million to 10 existing clusters focusing on advanced manufacturing. Table 7 provides a comparison of spending across countries on cluster support programs.

Another way countries support commercialization is through competitive funding programs that evaluate and fund private manufacturing R&D projects with commercial potential. Japan’s New Energy and Industrial Technology Development Organization (NEDO) was established in 1980 to promote the development of new energy technologies but has since broadened its scope to fund industrial R&D projects. NEDO officials said that a typical project they fund would have a budget of $12.5 million for five years. NEDO’s overall budget for 2012 was approximately $1.6 billion. Germany funds R&D through the Central Innovation Program for SMEs, which focuses on SMEs and business-related research establishments cooperating with them. The program funds up to half of a business’s costs for technical support, technology transfer, training, and other activities in the development of a new product or process, and has government funding of about $643 million per year. The Canadian Innovation Commercialization Program (CICP), with funding of approximately $32 million (U.S.) per year, is a federal program that helps companies bridge the pre-commercialization gap for their innovative goods and services, in part by testing innovative goods and services within the Canadian government before taking them to the marketplace.

The United States devotes a large amount of money to competitively awarded R&D funding relative to other countries we studied. SBA administers two large funding programs through the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs. The funds for the SBIR program are provided by federal agencies with an extramural budget of at least $100 million for research or R&D, and for STTR by agencies with extramural research or R&D budgets of at least $1 billion; SBA oversees the programs and the efforts of these agencies. According to SBA officials, the SBIR and STTR programs had combined budgets of approximately $2.5 billion in 2012 to fund awards in successive phases, designed to promote technological innovation and economic growth within small businesses. Generally, the agencies participating in the SBIR and STTR programs offer up to $150,000 to awardees for an initial 6-month period of performance, and those applicants who receive a subsequent phase award typically receive up to $1 million for a 2-year period of performance. SBA officials said that projects are often evaluated for potential commercial applications to the evaluating agency itself, such as the Department of Defense, as opposed to potential demand for the product from the private sector (although this varies by agency). This aspect of SBIR/STTR takes a similar approach to Canada’s CICP in that it uses government procurement as a means for potentially introducing innovative products into the larger market. Table 8 compares funding for some countries’ R&D grant programs.
In each of the countries we studied, trade policy is an important part of manufacturing policy, and each country’s approach shares commonalities with the others. Every country we studied—including the United States—focuses on export promotion, harmonization of standards, and protection of intellectual property rights. Canadian, German, Japanese, South Korean, and U.S. export promotion programs offer help in market identification and development. However, there are some differences. For example, in Japan, efforts to promote and harmonize product standards are supported by the government in conjunction with industrial leaders, but in the United States, they are led by the private sector in most cases. All five countries also provide information to help businesses establish or protect intellectual property rights as a way to encourage innovation and help ensure that manufactured goods can be sold abroad.

According to the World Trade Organization, in 2011, Canada, Germany, Japan, South Korea, and the United States were among the world’s largest exporters of manufactured goods—accounting for about $3.7 trillion in manufactured exports (or about 32 percent of the global export value in this category). (International Trade and Tariff Data, in the World Trade Organization’s International Trade and Market Access Data online (includes the ‘manufactures’ sector in the ‘exports’ trade flow category), accessed June 19, 2013, http://www.wto.org.) These countries generally offer similar types of export promotion services to domestic businesses, including assistance for participation in trade fairs, participation in trade missions, data and market analytics, and services targeted towards SMEs. However, there are some differences in how they provide these services. For example:

According to Canada’s Trade Commissioner Service officials, the Trade Commissioner Service manages the Export USA program, which helps Canadian SMEs understand the specific legal fundamentals of exporting to the United States, Canada’s largest trading partner.

According to State and Commercial Service officials at the U.S. embassy in Berlin, Germany’s trade fair system is key to German manufacturers’ success because it helps create awareness of global trends in different sectors, and showcases Germany as a place to do business.

According to TIRI officials, Japan’s Metropolitan Technical Support Network for Export Products—a cooperative initiative of nine prefecture-based research institutes—offers consultation and information on international product standards to SMEs for export products, as well as testing to determine compliance with those standards.

According to officials from South Korea’s Korea Trade-Investment Promotion Agency, 99 Korea Business Centers around the world can be used as SME branch offices. The Korea Trade-Investment Promotion Agency also manages logistics centers—operated with UPS or DHL—to facilitate Korean firms’ distribution operations overseas.

Canada, Germany, Japan, South Korea, and the United States also differ in the amount of resources they provide for export promotion and the number of locations that their export promotion efforts reach. For example, according to a recent U.S. International Trade Administration study, while all of these countries employ expert staff abroad to assist with the exporting process, the total number of staff varies somewhat, with the United States and Japan employing the fewest export promotion personnel (see table 9).
Similarly, although the amount of export promotion funding does not vary greatly across the countries in our study—from $226 to $381 million—the United States spends less on export promotion per $1,000 of GDP and per $1,000 of exports than many other similarly situated countries, including Canada, Japan, and South Korea (see table 10). Further, the four other countries in our review have a single agency primarily responsible for implementing export promotion, whereas the United States has several. For example, in the United States, the Trade Promotion Coordinating Committee (TPCC), an interagency task force, includes 20 agencies that participate in export promotion. Of the 20 TPCC agencies, seven are considered core agencies. In contrast, Canada’s Trade Commissioner Service, Germany’s Association of Chambers of Commerce and Industry, and South Korea’s Korea Trade-Investment Promotion Agency implement export promotion programs in their respective countries. We previously reported that Commerce’s Foreign Commercial Service activities align with relevant National Export Initiative trade promotion priorities, but that in an environment of limited resources, systematic use of economic, performance, and activity data could help allocate resources to achieve its goals more efficiently and effectively.

Although the United States and Japan both view international standards as an important component of manufacturing policy, they have different mechanisms for setting voluntary standards on industrial products. In the United States, the government primarily coordinates standards on industrial products through the private sector, whereas Japan’s national government plays a more active role in setting and enforcing standards. According to documents from the Japanese Industrial Standards Committee (JISC), a committee of up to 30 knowledgeable experts composed of members from the national government, private sector, industry associations, and academia, METI administers the national standards system by drafting and enforcing standards-related laws and regulations. According to the committee, four additional government ministries have the ability to set standards with input from the committee. Japan’s Next Generation Vehicle program illustrates the government’s active role in trying to influence international technical standards for manufacturing. According to NIST officials, the South Korean government also plays an active role in trying to influence international technical standards for manufacturing through standards as well as conformity assessment requirements. (We did not review mechanisms for setting standards on industrial products in Canada or Germany because these countries’ practices did not meet our selection criteria.)

In the United States, by contrast, voluntary standards for industrial products are generally developed by the private sector, and the federal government does not set or enforce them, according to NIST and industry officials. According to NIST officials, hundreds of private sector organizations, including professional societies, trade associations, testing and certifying organizations, and industry consortia develop standards through an open, consensus-based process. Among other things, NIST coordinates the use of private sector standards by federal agencies, states, and local governments to avoid the development of duplicative standards. In addition, NIST scientists and engineers work with the private sector to develop standards that are based on sound science and ensure that the standards are supported by effective measurements, test methods, and appropriate conformity assessment systems.
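As a point of reference, the per-$1,000 comparisons cited earlier in this section are simple normalizations of an export promotion budget by the size of the economy and of the export sector. Written generally, with B the promotion budget, Y GDP, and X total exports (all in the same currency), the two metrics are

\[
s_{\text{GDP}} = \frac{B}{Y} \times 1{,}000, \qquad s_{\text{exports}} = \frac{B}{X} \times 1{,}000.
\]

For example, a hypothetical budget of $300 million set against $1.5 trillion in GDP works out to $0.20 of export promotion spending per $1,000 of GDP, while the same budget set against $500 billion in exports works out to $0.60 per $1,000 of exports. These figures are assumptions used only for the worked example and are not values from table 10.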
We did not identify any key differences between the four foreign countries and the United States with regard to protecting intellectual property rights. All of the foreign countries we studied promote R&D to create intellectual property and support manufacturers by protecting intellectual property rights through various mechanisms. See appendix III for details on how various programs support intellectual property rights. The United States is an acknowledged global leader in the creation of intellectual property, and has generally advocated strong intellectual property protection.

Germany’s national government has maintained a substantial commitment to a dual training system, which helps provide a supply of skilled workers for the manufacturing sector. In particular, Germany’s system has a long history of building public-private partnerships to develop curriculum and standards and providing graduates with nationally recognized, portable credentials. While their efforts are not on the same scale as the German dual training system, some federal and local entities in the United States are taking steps to provide a combination of work-based and academic learning to meet manufacturers’ needs through public-private partnerships. In addition, Labor, some U.S. states, and Canada’s federal government have taken steps to encourage participation in their apprenticeship programs to train workers in the skilled trades needed by manufacturers. While the United States does not have a national system to issue industry-recognized credentials, the manufacturing industry, with participation from the federal government, has recently started moving in this direction.

Germany’s dual training system facilitates broad consensus among stakeholders in business, labor, and education, which in turn creates a supply of workers with skills needed in the manufacturing sector. The national government enforces the dual training system’s regulations and has coordinated with industry, union, and state government stakeholders to develop skill standards in 350 occupations. One agency, the Federal Institute for Vocational Education and Training, is responsible for conducting education and labor market research, facilitating regular stakeholder coordination among public agencies and private industry associations on needed skills, and managing changes to the system’s standards. The Federal Institute for Vocational Education and Training was established in 1970, and given its central role in bringing stakeholders together, its board is referred to as the “parliament” of vocational education. Some experts note that this unified approach and cooperative relationship among the various stakeholders are strengths of this system, and representatives of German government and industry cited the system as an important support to the manufacturing sector. Moreover, the need for the dual training system enjoys high overall societal acceptance in Germany. In addition, support for training and for the skilled trades is deeply embedded in German society, and about 55 percent of high school students enter the dual training system each year. In contrast, the United States and Canada have a more decentralized system of skills training programs, with management of these programs largely devolved to states and localities.
In the United States, Labor has a major role in administering a number of federally funded skills training programs, including those under the Workforce Investment Act of 1998 (WIA), which largely target dislocated workers and economically disadvantaged adults and youth. WIA programs are overseen at the local level by Workforce Investment Boards, committees of local business, labor, and government representatives, with services provided through local American Job Centers. Labor also administers a Registered Apprenticeship Program, which offers assistance in creating on-the-job training programs in accordance with accepted skills certification organizations in relevant disciplines. Registered apprenticeship programs are sponsored on a voluntary basis by individual employers, employer associations, or labor-employer agreements, and are federally administered by Labor in 25 states and by state apprenticeship agencies in the other 25 states. In addition, the Department of Education plays a major role in supporting career and technical education in community colleges and regional and technical centers through the Carl D. Perkins Career and Technical Education Improvement Act of 2006 (Perkins Act).

U.S. officials and experts reported that, unlike Germany, the United States does not have widespread societal support for vocational education and training for the skilled trades, although some federal and local entities are taking steps to target training to meet manufacturers’ needs through public-private partnerships. For example, at the federal level, Labor and the Department of Education are providing $2 billion over 4 years to community colleges around the country through the Trade Adjustment Assistance Community College and Career Training initiative. The grants support partnerships between community colleges and employers to develop instructional programs for workers dislocated by international competition that meet specific industry needs, including the manufacturing industry. At the local level, community and technical colleges provide skills training under WIA and the Perkins Act, and in many areas, work closely with employers to develop customized training in key disciplines where workers are needed. One example of this customized training is a partnership that the German corporation Siemens has established with Central Piedmont Community College in North Carolina in order to create a pipeline of skilled workers for manufacturing plants located in the area. Participating students work at Siemens while taking courses in the college’s mechatronics degree program. Siemens pays each student’s tuition costs, while the participant earns a paycheck and receives company-specific technical training and hands-on experience. Our recent report highlighted similar efforts of Workforce Investment Boards in various locations, including California, Colorado, Illinois, Kansas, and Michigan, to build public-private partnerships and tailor training programs to meet the specific needs of manufacturers. In addition, some states in the United States have taken steps to focus on skilled trades needed by manufacturers. For example, according to state documents, South Carolina’s Personal Pathways to Success system combines academic training with options for work-based learning in 16 career clusters, including manufacturing. In Canada, apprenticeship programs are primarily administered by the 13 provincial or territorial governments.
According to Canadian officials, in recognition of the economic need for apprentices and to increase participation in provincial and territorial apprenticeship programs, the Canadian government offers incentives for continuing and completing an apprenticeship within the country’s Red Seal program, which encompasses 55 trades and includes skills needed in the manufacturing sector. In addition, these officials noted that the Canadian government also offers a tax credit for businesses that hire apprentices and a tax deduction for the purchase of tools by any eligible apprentice. See Appendix III for more information on Canada’s programs.

In the United States, the Manufacturing Institute’s Skills Certification System aims to provide a unified framework to align skills certifications from various industry associations with Labor’s Advanced Manufacturing Competency Model. (Labor defines a credential as a recognition of an individual’s attainment of measurable technical or occupational skills necessary to obtain employment or advance within an occupation.) According to the Manufacturing Institute, the Skills Certification System is intended to establish a comprehensive set of nationally portable, industry-recognized credentials to validate the skills and competencies needed to be productive in any manufacturing environment. Currently, the Skills Certification System endorses industry certifications in several areas, including machining and metalworking; automation; fabrication; mechatronics; and transportation, distribution, and logistics. While federal and private sector entities are collaborating on this effort, the more established German system suggests that it will require a long-term, sustained commitment and coordination between federal and state authorities to bring such an effort to fruition on a large scale.

As the United States considers policies to enhance the global competitiveness of its manufacturing sector, the actions other economically advanced countries have taken to improve their competitive edge in manufacturing are of particular interest. The manufacturing policies and programs in each of the four countries we examined are shaped by each country’s unique political, social, cultural, and economic characteristics and may not be readily applicable to the United States. However, their manufacturing approaches suggest some key areas that the United States may wish to consider as it continues to formulate its manufacturing strategy and programs to carry out that strategy. Each of the foreign countries took a multi-faceted and hands-on approach to spur innovation in ways that are intended to lead to commercialization, suggesting that no one program or mechanism can fully address the challenge of bridging the gap between innovative ideas and manufacturing sales. In addition, we noted the sustained government commitment to managing a national system of vocational skills training and credentialing that facilitates consensus among business, labor, and education. This was particularly evident in Germany. The United States has no comparable system, and given major cultural differences, it is unlikely that the United States would adopt a similar system on the same scale. However, some recent examples of public-private partnerships established to target training for the manufacturing sector and develop a set of nationally portable, industry-recognized credentials show how certain aspects of the German dual training system might be applied through incremental actions.
More generally, our analysis of the manufacturing programs in the four selected countries shows the broad extent to which U.S. competitors are leveraging the public sector to help their manufacturing industries maintain competitiveness in a rapidly changing global economy. Their programs involve a partnership of government and the private sector, with varying but shared responsibilities for supporting applied R&D and commercialization efforts; facilities for research, testing, and production; and expertise and services. Many of these programs go beyond a more traditional government role of setting incentives, establishing regulations, and providing funding. In the end, the best guide to devising U.S. manufacturing policy may be to think about how the mix of existing and proposed federal programs fits into our unique economic context and can provide the most benefit to the economy at large. We sent a draft of this report to Commerce, Labor, and State, and selected draft report sections to SBA, the Department of Education, and the Department of Energy. Commerce, Labor, SBA, and Energy provided technical comments, which we incorporated, as appropriate. State and Education did not have comments. We also sent draft report sections to foreign officials to verify information on foreign programs that support manufacturing, and incorporated technical comments from these officials, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies to the Secretaries of Commerce, Labor, and State, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Andrew Sherrill at (202) 512-7215 or SherrillA@gao.gov; or Lawrance Evans at (202) 512-4802 or EvansL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report addresses two key questions:
1. What government strategies and programs have other advanced economies implemented to approach issues similar to those facing manufacturing in the United States?
2. What key distinctions exist between policy approaches to support manufacturing in other advanced economies and those in the United States?
We selected four study countries—Canada, Germany, Japan, and South Korea—and the programs that support manufacturing in each country based on several factors. Since we focused on "similarly situated" countries for comparison to the United States, we considered primarily high-GDP countries that have democratic governments. We contacted approximately 20 experts whose careers involved studying or advocating for manufacturing policy; these included representatives from industry associations, labor, academia, think tanks, and trade groups. We interviewed these experts about key manufacturing issues and obtained their views about which foreign countries had innovative programs to support manufacturing. We also asked officials from the Departments of Labor (Labor) and Commerce (Commerce) to comment on some of the selected foreign programs recommended by experts. We conducted additional research on the programs in the countries experts mentioned.
To make our final country selection, we considered factors such as the number of experts who mentioned a country as a candidate for study and the number and breadth of programs our research indicated each country offered. We then worked with officials from the U.S. Department of State (State) stationed in our selected countries and foreign embassy officials to finalize programs for our review, based on our research and the recommendations of the foreign officials. We did not attempt to perform a comprehensive review of programs that support manufacturing in the four selected foreign countries, nor did we seek information in all countries about programs in each of our three key policy categories—innovation, trade, and training. For example, in some countries, we did not examine training or trade programs because, through expert opinion, input from U.S. officials, and our literature review, we did not identify programs in these areas as being particularly informative for U.S. policy. To obtain specific program information, we traveled to each country to interview foreign officials overseeing each program. We also analyzed documents with key program information that these officials provided. We did not analyze or review foreign laws or regulations, and relied on program information, including budget information, provided by foreign agency officials and other sources. We also sent report excerpts to foreign officials to verify information on the programs and incorporated technical comments from these officials where appropriate. Moreover, we did not evaluate the effectiveness of any foreign programs. Because we did not conduct original analyses, none of the program descriptions regarding foreign programs in this report should be considered GAO assessments or evaluations of those programs. To identify key differences between the manufacturing policies of our selected foreign countries and those of the United States, we synthesized our analyses of the foreign programs we examined to identify common and unique features among them. We then researched comparable programs in the United States, in part based on suggestions from Commerce and Labor, and interviewed staff at agencies administering those programs. We did not attempt to conduct a comprehensive review of U.S. manufacturing policy or programs, nor did we evaluate the effectiveness of U.S. programs. This report uses data obtained from large U.S. and international agencies and from foreign manufacturing agencies. We assessed the reliability of data from the Bureau of Labor Statistics, the Bureau of Economic Analysis, the World Bank, and the Organisation for Economic Co-operation and Development (OECD) by reviewing literature provided by the organizations regarding their methodology for compiling data, including measures to ensure data quality and comparability across countries. We determined that these data were sufficiently reliable for the purposes of our report. Regarding data provided by foreign agencies, we did not independently attempt to confirm the data except where documentary evidence provided by those agencies allowed us to do so. However, we did confirm the accuracy of the figures and our use of them by having foreign officials review relevant excerpts of the report. We found these data to be sufficiently reliable for our purposes.
For data on export promotion, we assessed the reliability of data from the World Trade Organization by reviewing literature provided by that organization regarding its methodology for compiling data, including the use of the international standard system for categorizing exports. We contacted cognizant Commerce officials with respect to an International Trade Administration study that compared and analyzed foreign countries' export promotion budget levels, and with respect to data on the number of countries where the United States conducts export promotion activities. We reviewed the steps Commerce officials took in collecting and analyzing the data, and we found the data to be sufficiently reliable for our purposes. We obtained data on the number of countries in which Canada, Germany, Japan, and South Korea conduct export promotion activities from officially released information on their governments' websites. We confirmed these numbers with cognizant Canadian, German, and South Korean officials. We were unable to confirm the number of countries in which Japan conducts export promotion activities with cognizant officials. We believe these sources are sufficiently reliable for the purpose of our report. We conducted this performance audit from March 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In recent years, both Congress and the Administration have taken steps to help define the goals and broad principles of a U.S. manufacturing strategy. The America COMPETES Reauthorization Act of 2010 aimed in part to increase the nation's R&D investment in science and engineering research and in science, technology, engineering, and mathematics (STEM) education. This law required the establishment of new programs and bodies, including Commerce's Regional Innovation Program, a committee involved in STEM education, and an Advisory Council on Innovation and Entrepreneurship. In addition, it outlines requirements for various government agencies relating to improving the competitiveness of the manufacturing sector in the United States. It also created an interagency Committee on Technology under the National Science and Technology Council responsible for planning and coordinating federal programs and activities in advanced manufacturing. The act also directs Commerce to (1) analyze taxes, regulations, and the economy; trade and export policies; workforce issues; and barriers to U.S. competitiveness; and (2) develop a 10-year innovation and competitiveness strategy. Furthermore, the America COMPETES Reauthorization Act of 2010 authorized funding for research through the National Science Foundation, the National Institute of Standards and Technology (NIST), the Department of Energy, and Commerce. Likewise, the Administration has proposed a framework and various initiatives to spur manufacturing.
In 2009, the Administration put forth "A Framework for Revitalizing American Manufacturing," which lays out policies in seven areas: (1) providing workers with the opportunity to obtain the skills necessary to be highly productive; (2) investing in the creation of new technologies and business practices; (3) developing stable and efficient capital markets for business investment; (4) helping communities and workers transition to a better future; (5) investing in an advanced transportation infrastructure; (6) ensuring market access and a level playing field; and (7) improving the general business climate, especially for manufacturing. The Administration has put forth several other documents laying out its manufacturing strategy since 2009—including "The National Strategic Plan for Advanced Manufacturing," which it published in February 2012 to guide federal programs and activities in support of advanced manufacturing R&D in response to the America COMPETES Reauthorization Act of 2010. In addition, the Administration has established several manufacturing initiatives. First, in 2009, the White House announced the formation of the President's Council of Advisors on Science and Technology (PCAST), an advisory group of the nation's leading scientists and engineers who directly advise the President on matters related to U.S. manufacturing. In June 2011, the President launched the Advanced Manufacturing Partnership (AMP), a private-sector-led, national effort that brings together industry, universities, and the federal government to chart a course for investing in and developing emerging technologies to create high-quality manufacturing jobs and enhance U.S. global competitiveness. In December 2011, the Administration established the White House Office of Manufacturing Policy, which has worked with PCAST and AMP to coordinate policy to enable innovation. In March 2012, the President announced the establishment of the National Network for Manufacturing Innovation (NNMI), one purpose of which is to close the gap between research and development activities and the deployment of technological innovations in domestic production of goods. To accomplish this goal, the NNMI plans to form up to 15 manufacturing innovation institutes around the country to serve as regional hubs of manufacturing excellence. In August 2012, the National Additive Manufacturing Innovation Institute (NAMII) was formally established in Youngstown, Ohio, as the pilot institute under the NNMI infrastructure. NAMII is a public-private partnership with member organizations from industry, academia, government, and workforce development resources, with the goal of transitioning additive manufacturing technology to the mainstream U.S. manufacturing sector. Another effort in collaboration with NAMII is the Department of Energy's Manufacturing Demonstration Facilities, which are collaborative manufacturing communities that share a common R&D infrastructure. The first facility was launched at Oak Ridge National Laboratory in January 2012. This facility will provide equipment, scientists, and engineers to develop new energy-sector technologies for commercial application. In May 2013, the Administration announced $200 million in funding to open three new manufacturing institutes under the NNMI. The following programs illustrate the ways in which Canada, Germany, Japan, and South Korea have pursued manufacturing goals through innovation, training, and trade policies. We did not independently evaluate these programs.
Our descriptions are based on interviews with knowledgeable officials and review of relevant program documentation. We analyzed documents with key information that these officials provided, but did not review or analyze the primary source materials for such program information. In addition, we did not analyze or review any foreign laws or regulations. Because we did not conduct original analyses, none of the program descriptions regarding foreign programs in this report should be considered GAO assessments or evaluations of those programs. We also sent report excerpts to foreign officials to verify information on the programs and incorporated technical comments from these officials where appropriate. The Accelerated Capital Cost Allowance was first introduced in Canada's 2007 budget. According to Finance Canada officials, the policy allows for a 50 percent depreciation of new investment in machinery and equipment in the manufacturing and processing sector. By allowing a faster tax write-off of eligible investments, this measure may help manufacturers remain competitive in the current global environment. According to Canadian officials, the policy has been extended until 2015. Representatives of the Canadian Manufacturers and Exporters—Canada's major national association of manufacturers—told us that although they have advocated that the policy become permanent, this has not been achieved.
Canadian Innovation Commercialization Program (CICP)
The Canadian Innovation Commercialization Program (CICP) was created to bolster innovation in Canada's business sector and was made permanent in Canada's 2012 budget, with $95 million for 3 years starting in 2013, and $40 million per year thereafter. CICP helps companies bridge the pre-commercialization gap for their innovative goods and services in several ways: (1) awarding contracts to entrepreneurs with pre-commercial innovations, (2) testing and providing feedback to entrepreneurs on the performance of their goods or services, (3) providing innovators with the opportunity to enter the marketplace with new goods and services, and (4) providing information on how to do business with the Government of Canada to enhance procurement opportunities. The Canadian Innovation Commercialization Program targets innovations in the priority areas of environment, safety and security, health, and enabling technologies (e.g., biotechnology). According to Canadian officials, Canada's Scientific Research and Experimental Development (SR&ED) tax incentive program supports business research and development. The SR&ED tax incentive program has two components: (1) an income tax deduction, which allows immediate expensing of all eligible expenditures; and (2) an investment tax credit with the following characteristics: The general rate is 20 percent of qualified expenditures carried out in Canada. An enhanced rate of 35 percent is provided to small and medium-sized Canadian-controlled private corporations on their first $3 million of eligible expenditures. Unused credits earned in a year are generally fully refundable for small and medium-sized Canadian-controlled private corporations on their first $3 million of current expenditures. Currently, eligible expenditures include most of the costs that are directly related to SR&ED, including salary and wages, materials, and overhead, as well as contracts and capital expenditures (other than most buildings). The 2012 budget announced that, effective January 1, 2014, the general rate would be reduced to 15 percent.
In addition, the 2012 budget announced that effective the same date, capital expenses (e.g., equipment and machinery) would be eliminated from eligible SR&ED costs. Finance Canada officials estimated $3.6 billion in SR&ED-associated tax expenditures in 2012.
Industrial Research Assistance Program (IRAP)
Canada's 2012 budget announced the Canadian government's intent to double the funding for the Industrial Research Assistance Program (IRAP). IRAP—a federally funded nationwide network of over 200 Industrial Technology Advisors with sector-specific expertise who consult with SMEs on conducting research and development—had a budget of $258 million in 2012-2013. IRAP advisors assist SMEs in developing, adopting, and adapting technologies, as well as incorporating them into competitive products with potential for commercialization. According to IRAP officials, over 80 percent of IRAP clients have 50 or fewer employees. According to the 2012 IRAP survey, 62 percent of participating businesses indicated that the program had enhanced their ability to conduct research and development. Canada's five federal regional development organizations are another source of support for innovation. Each agency covers a specific geographic area. The Federal Economic Development Agency for Southern Ontario—the regional development agency that was established most recently—was provided $1 billion in funds to expend from 2009 through 2014 in support of local economic development and competitiveness. The Federal Economic Development Agency for Southern Ontario has created programs intended to boost private sector investment in start-up companies, help SMEs collaborate with colleges and universities to commercialize new products and services, and develop new technology. For example: Investing in Business Innovation. The Investing in Business Innovation program boosts private sector investment in start-up businesses and allows for the accelerated development and introduction to market of new products, processes, and practices. It also helps angel investor networks and their associations attract new investment and support the growth of angel investment funds. Eligible recipients include southern Ontario nonprofit angel investor networks, nonprofit organizations that represent angel investor networks, and start-up businesses with fewer than 50 employees who have an investment agreement with recognized angel and/or venture capital investors. Eligible activities include product and process applied research, engineering design, technology development, product testing, marketing studies, certification, proof of concept, piloting and demonstration, problem solving, and commercialization of intellectual property. Technology Development Program. The Technology Development Program helps research and innovation organizations, the private sector, post-secondary institutions, and nonprofit organizations work together to accelerate the development of technologies that will result in new market opportunities for southern Ontario businesses. Eligible recipients include southern Ontario nonprofit organizations, such as innovation and commercialization organizations, and southern Ontario post-secondary institutions. Applied Research and Commercialization Initiative. The Applied Research and Commercialization Initiative is designed to address the gap between research and commercialization in southern Ontario by encouraging collaboration between SMEs with pre-market needs and post-secondary institutions with applied research expertise.
The goal of the initiative is to accelerate innovation and to improve productivity and competitiveness for businesses located in southern Ontario. Eligible applicants include post-secondary institutions, where SMEs are the primary beneficiary. Canada also sees the lack of venture capital as a challenge to the country's innovation and manufacturing capabilities. To increase the formation and usage of venture capital and encourage innovation, the Canadian government proposed $400 million in venture capital funds in its 2012 budget. The purpose of the venture capital funding is to support early stage risk capital and the creation of large-scale venture capital funds led by the private sector. Canada's Trade Commissioner Service provides several key export services to support manufacturers, including: Preparation for international markets. Trade Commissioner Service offices in Canada help exporting businesses determine whether they are internationally competitive, decide on target markets, collect market and industry information, and improve their international business strategy. Market intelligence and strategy. Trade Commissioner Service representatives help businesses determine the level of opportunity that exists in a particular market, appropriate approaches to the market, and the amount of effort and resources required by providing up-to-date information on barriers and regulations associated with entering a specific region, as well as information on upcoming opportunities or emerging trends. The Trade Commissioner Service provides practical advice in areas such as navigating business and cultural practices, local representation, market entry strategies, and participation in global value chains. Provision of qualified contact information. The Trade Commissioner Service provides exporters with business contacts that include potential buyers and partners, financial and legal professionals, technology sources, manufacturers, foreign regulatory authorities, and foreign investment promotion agencies. Advice to address market access challenges. The Trade Commissioner Service can advise on market access problems and other business challenges, including customs clearance and shipping, unfair business treatment, contract bidding, storage and warehousing, insurance coverage and claims, and overdue accounts receivable. According to the Trade Commissioner Service, agency staff provided these services to over 14,000 clients from April 2011 to March 2012. Trade Commissioner Service officials told us that many of the clients are manufacturers. Canada also has some programs that provide a framework for skills training related to manufacturing. For example, Canada's Interprovincial Red Seal Program, established in the 1950s, sets national standards for certification of excellence in 55 skilled trades, some of which are within the manufacturing sector. The Red Seal Program provides: National definition of competency. A national occupational analysis, developed for each Red Seal trade, identifies all the tasks performed in the trade and is used as a base document for the development of interprovincial standard examinations. The provinces and territories are encouraged to use the analysis for curriculum development. Endorsement of advanced skills.
Through the program, apprentices who have completed their training and become certified journeypersons are able to obtain a Red Seal endorsement on their provincial or territorial Certificates of Qualification and Apprenticeship by successfully completing an interprovincial Red Seal examination. The Red Seal Program is administered by the Canadian Council of Directors of Apprenticeship, a body composed of the Director of Apprenticeship from each province or territory, and representatives of Human Resources and Skills Development Canada, a federal agency. In 2009, the council undertook an evaluation of the Red Seal Program and determined that increased emphasis on specific, measurable, industry-defined standards and multiple forms of skills assessment would strengthen Canada's apprenticeship system. Canada offers the following financial support to individuals pursuing certification in Red Seal trades: Apprenticeship grants. The Apprenticeship Incentive Grant is a taxable grant of $1,000 per year or level, up to a maximum amount of $2,000 per person. It is available to registered apprentices once they have successfully completed their first or second year/level (or equivalent) of an apprenticeship program in one of the Red Seal trades. The Apprenticeship Completion Grant is a taxable grant of $2,000 for registered apprentices who complete their apprenticeship training and obtain their journeyperson certification in a designated Red Seal trade. Cost allowance for tools. In addition to grants to assist with apprenticeship completion, the Tradesperson's Tools Deduction is a federal tax policy that provides employed tradespersons with an annual deduction of up to $501 to help cover the cost of new tools necessary to their trade. The deduction applies to eligible tools if their total cost exceeds $1,096 and the purchase was made by an employed tradesperson. Germany established the Fraunhofer Institutes, a nationwide network of 60 applied research facilities with expert research staff, in 1949 as part of its efforts to rebuild Germany's research infrastructure after World War II, according to Fraunhofer officials. Today, each Fraunhofer institute specializes in a particular subject matter. The 60 institutes are divided into the following categories of specialization: (1) materials and components; (2) microelectronics; (3) information and communications technology; (4) production; (5) light and surfaces; and (6) life sciences. Fraunhofer Institutes are co-located with universities. According to Fraunhofer officials, in 2012, the Fraunhofer network had a budget of $2.8 billion. Each individual Fraunhofer Institute's funding is a mix of support from national and state government sources and private sector contracts for research. Applied research collaboration with the private sector on a contractual basis. Contracts or requests for work from private enterprises are the main way the institutes receive specific research tasks. For each research project, enterprises that have developed a process or technology for potential commercialization enter into a contract for applied research services with the Fraunhofer Institute that has the appropriate subject matter expertise. All Fraunhofer Institutes measure their performance by the number of contracts and the amount of revenue generated by contracts with the private sector. Flexible intellectual property.
In some instances, part of intellectual property can be retained by Fraunhofer, and in some cases, intellectual property can be shared by a combination of Fraunhofer, the university, and the company involved. The technology that was eventually developed into MP3 music files originated in Fraunhofer research, and Fraunhofer currently holds several related patents. Germany also supports innovation through various industrial clusters around the country. The Spitzencluster program, the national government’s cluster initiative, is intended to strengthen clusters judged through a competitive application process to be the best clusters in the country. Spitzencluster competition winners are selected by an independent jury for demonstrating the ability to pursue strategic objectives in emerging industries identified in Germany’s High Tech Strategy 2020. Spitzencluster winners receive up to $51 million over 5 years, and the cluster’s participating entities, including businesses and universities, provide the remaining operating costs in a 50-50 cost share arrangement. To date, 15 clusters around the country have received the Spitzencluster designation and are conducting research in a variety of areas, including the following: Information and communication technology. The Cool Silicon cluster in the region around Dresden develops components and complex system solutions that significantly reduce the energy consumption of information and communications technologies systems. The cluster comprises more than 100 companies and research facilities and is also linked with the Technical Universities of Dresden and Chemnitz. In the long term, the cluster aims to become one of the world’s leading locations for energy efficiency in electronics. Logistics. In addition to developing North Rhine-Westphalia’s leading global position in the field of logistics, the EffizienzCluster aims to establish itself as a center for the innovative design of high-quality logistics services. There are approximately 120 companies and 11 research institutes working in the cluster. These include the Fraunhofer Institute for Material Flow and Logistics, the Technical University of Dortmund, and large and medium-sized enterprises. Electric car engineering. In the Electric Mobility South-West cluster, partners from the fields of automotive engineering, energy and supply engineering, information and communications technologies and services, as well as the cross-section field production engineering are working on new concepts for electric mobility. The cluster’s main projects include the design of battery production systems. Located in the Karlsruhe-Mannheim-Stuttgart-Ulm region, the cluster links 80 key players from industry, universities and research institutes, international companies, and SMEs. Germany’s High Tech Strategy also emphasizes the importance of creating more innovation opportunities for SMEs. Germany has established several programs recently to encourage SME innovation. These programs include the Central Innovation Program for SMEs, which connects SMEs directly with technical advisors; the HighTech Grunderfonds program, which provides venture capital; and the Signo program, which provides assistance in filing for patent rights. Technical advisory services. The Central Innovation Program for SMEs is the German government’s largest program that supports innovation in SMEs by reducing the technological and economic risks of R&D projects. 
The program includes 100 technical advisors who help SMEs with eligible projects submit grant applications. Eligible projects are proposals for new products or processes that show high potential for commercialization and contain technical risk. SMEs are matched with public or private nonprofit research institutions to conduct necessary research and development to explore possible ways to develop the new product. Initiated in 2009 and currently planned to continue through 2014, the program has an annual budget of $643 million and gives out about 4,000-5,000 project grants every year. Each grant covers 50 percent of the project costs, with the SME supplying the remaining 50 percent. At the close of each grant period, SMEs submit a report that details successes, challenges, and lessons learned in the process of attempting to develop the new product. Venture capital funding. The HighTech Grunderfonds program is a public-private venture capital fund that uses its annual budget of $411 million to provide support in amounts up to $643,000 to innovative start-up companies. Assistance with filing for patent rights. Signo provides federal assistance to SMEs in securing intellectual property for innovative products. The program provides support to SMEs in completing national and international patent applications. In 2012, the Signo program had a budget of $22 million. Germany's Association of Chambers of Commerce and Industry manages the country's largest export promotion effort. The national association brings together a network of 80 local chambers of commerce, which represent the interests of all commercial enterprises, with a focus on small businesses. The Association of Chambers of Commerce and Industry and its funding agency, the Ministry of Economics and Technology, foster export activities in two main ways: Selectively establishing partnerships abroad. To establish partnership offices in foreign countries, the Association of Chambers of Commerce and Industry works through Chambers of Commerce abroad and meets with relevant stakeholders to determine whether German companies have sufficient interest in establishing an office. Currently, there are German chambers of commerce in 85 countries around the world. Annually, the Association of Chambers of Commerce and Industry has a budget of $219 million to manage the network of local chambers of commerce in Germany, as well as the partnership offices worldwide. Providing assistance for trade fair attendance and participation in trade delegations. According to German officials, the Ministry of Economics and Technology provides funding for businesses to participate in trade fairs around the world. The program supports SMEs in particular, but larger companies are not excluded from the program. Annually, assistance is provided for participation in over 200 trade events. The trade fair program has an annual budget of $54 million. Germany's apprenticeship system—also referred to as the dual training system—provides post-secondary vocational education and training for students who wish to learn a skilled trade, and has several major features: Combined on-the-job training and classroom instruction, according to nationally defined standards. The program requires a combination of on-the-job training and classroom instruction leading to certification in any of Germany's 350 nationally recognized occupations. At the national level, industry associations and government officials negotiate the standards for certification as skilled in each occupation.
At the state level, classroom instruction is formulated for each occupation. Through the Federal Institute for Vocational Education and Training, the national government coordinates with industry groups to obtain input on the types of skills and training necessary, and then structures apprenticeships to meet those needs. According to German officials, this coordination was established by the Vocational Education Reform Act of 1969 (as amended in 2005). The executive board of the Federal Institute for Vocational Education and Training includes representatives from German unions, employers' associations, federal agencies, and state governments. Nationally recognized credential. As a result of the coordination at the national and state levels, the credentials obtained through the apprenticeship program are recognized by businesses and jurisdictions across the country. Public-private cost sharing. Employers pay apprentices a contractually agreed-upon stipend, with national unions and employer associations agreeing on the base apprentice wage in each occupation. According to officials from the Federal Institute for Vocational Education and Training, German businesses spent about $31 billion on apprenticeship programs in 2010. According to Ministry of Economy, Trade, and Industry (METI) officials, the initial impetus for the Next Generation Vehicle (NGV) Program—a component of its Green Growth strategy—was to reduce carbon dioxide emissions and high energy prices. More recently, the 2011 Fukushima disaster has focused attention on the need to reduce reliance on nuclear energy without increasing imports of petroleum or natural gas. Under the NGV program, private firms will manufacture the vehicles, but the Japanese national government plays the following roles: Setting targets for alternative-fuel vehicle diffusion. The program sets a target for NGVs to comprise between 20 and 50 percent of passenger car sales by 2020, and between 50 and 70 percent by 2030. The program also sets a target of 2 million normal chargers and 5,000 quick chargers by 2020. Providing subsidies to encourage purchase of NGVs. The national government covered half of the price gap between the cost of an electric vehicle and that of a gasoline model, up to approximately $12,500, since 2009 for electric vehicles, plug-in hybrid vehicles, and clean diesel vehicles. It also provided a temporary tax reduction of between 50 and 75 percent on acquisition tax and tonnage tax. The national government also subsidizes half the price of installation for the charging stations. Establishing international standards for battery performance and charging systems. Japan's government is actively seeking to influence the setting of international technological standards, among competing models, for various components of the NGV, including battery performance and chargers. Japan's national government works with regional and local governments, universities, and/or the private sector in the following areas: Conducting R&D for battery technology. The national government collaborates with universities on related basic R&D, such as a mechanism to analyze the chemical response of batteries. However, for development of technology that will be commercially available in the near future, such as research on battery performance, the national government shares up to two-thirds of the cost with the private sector. Supporting related infrastructure.
As part of its Green Growth Strategy, the private sector and local governments play a role in installation and operation of quick chargers. For example, car dealers offer memberships for quick chargers, and some local municipalities install quick chargers in their buildings and allow free use. According to Japanese national government officials, their role may include (1) developing the infrastructure for these chargers by contracting a private sector operator to form a network of charging stations; (2) developing a mechanism to charge users; (3) creating continuity in charger fees; and (4) ensuring payback to those providing these charger stations to accommodate the mix of actors and business models—some charging stations are free, others involve fees—and to ensure that charging stations are located in critical areas, such as along popular routes to rural/resort areas. According to Japanese national government officials, they plan to dedicate about 80 percent of the fiscal year 2012 budget to the subsidies that cover half of the price gap between the cost of an electric vehicle and that of a gasoline model and to charger installation. The remainder will fund the development of advanced batteries. The New Energy and Industrial Technology Development Organization (NEDO) is Japan's largest public agency that promotes R&D. It aims to promote the development and introduction of new technologies. Its programs and projects include: (1) promotion of research and development of energy, environmental, and industrial technologies; (2) development, demonstration, and introduction of promising technologies that private sector enterprises cannot transfer to the practical application stage by themselves due to the high risk and long development period required; and (3) project management to assist in carrying out private sector projects. NEDO typically follows R&D priorities set by Japan's national government, but it also works with industry and universities to identify current trends and needs. It selects participants for its projects through a public solicitation process. Typically, NEDO connects university researchers and industry to collaborate on joint research. It may divide a project into different parts and assign responsibility for each part to a consortium of companies and/or universities. Research is conducted by the companies or universities, and project participants retain intellectual property, such as patents, resulting from their research. After a project is complete and technologies are developed, project participants are mainly responsible for commercialization. NEDO facilitates commercialization by coordinating research undertaken by the government and researchers and connecting relevant entities with potential users, according to officials. NEDO, among others, is working to create a stronger link between R&D spending and profits. According to NEDO officials, its economic analysis illustrates a positive return on investment. For example, according to NEDO officials, annual product sales resulting from 50 outstanding projects have resulted in a return of seven times the initial investment. However, NEDO officials said that they face challenges measuring concrete returns to taxpayers and isolating NEDO's contribution. Therefore, they cannot claim that these returns are the direct result of their efforts. NEDO is a public management agency; it does not have any research facilities of its own.
According to Japanese government documents, efforts to integrate R&D across ministries and agencies with extensive collaboration between industry, academia and the government are part of Japan’s national comprehensive strategy. NEDO has overseas offices in Beijing, Bangkok, New Delhi, Paris, Washington, D.C., and Silicon Valley. NEDO officials told us that these offices exist largely to conduct research on trends in other countries and to create partnership opportunities with foreign research institutes as well as to maintain and advance Japan’s position as a global leader. In fiscal year 2012, approximately 90 percent of NEDO’s funding was used to develop technology for national R&D projects, including new manufacturing technology, new energy, nanotechnology, and materials, among others, according to NEDO officials. Less than 3 percent was used to fund commercialization and practical application activities, including technology innovation for small businesses. Other funding went to financial support for young researchers and activities related to the Kyoto Protocol Mechanisms. According to NEDO officials, NEDO typically provides funding of up to $12.5 million for 5-year projects. The Technology Advanced Metropolitan Area (TAMA) Association is a regional cluster that supports local manufacturers by matching them with national, regional, and local interests to improve R&D and commercialization of technology and products. It is one of 18 clusters in METI’s Industrial Cluster Project, which aims to strengthen over 10,000 regional SMEs and promote industrial clusters throughout Japan. The TAMA Association is a membership organization with approximately 300 manufacturing companies that focus on advanced technological and design capabilities or process technologies. In addition, it includes approximately 300 organizations and individuals that support innovations, including universities, financial institutions, local governments, and industry groups. The TAMA Association provides its members with assistance services in the following areas, among others: Creation of networks between large and SME manufacturers. The TAMA Association produces a technical report that includes summaries of technology for each member company to help connect SMEs to larger manufacturers seeking the technology in which it specializes. After viewing the report, the large manufacturers contact the SMEs to improve existing R&D or produce product samples. According to one TAMA Association official, 170 SMEs have been hired for projects by large companies as a result of these activities in 2011. Promotion of cooperation in R&D between manufacturers and universities. The association supports R&D to promote cooperation between manufacturers and universities. For example, a manufacturer in need of a particular type of R&D would hire the TAMA Association to connect the company to university researchers conducting R&D in that field. The TAMA Association also develops a network between SMEs, financial institutions, and research institutions to help a company develop a product or technology. Support for creation of new businesses and technologies. The TAMA Association relies on a network of about 150 experts, including consulting engineers and SME specialists, who support the creation of new businesses and development of new technologies by manufacturing companies. 
The TAMA Association helps SMEs apply for national government grants, and it works with financial institutions to provide funding for SME projects, according to a TAMA Association official. In addition, the association provides support to manufacturers that aim to enter a new field of business or commercialize technologies, including assistance with formulating business plans and acquiring business partners. Development of marketing channels and overseas operations. The TAMA Association works with experts to cultivate new markets for the region’s technologies—in part through its overseas branches in South Korea, China, and Taiwan and its affiliates in Germany, Italy, Singapore, and the United States—to develop a network and products that assist manufacturers with formulating marketing strategies. In addition, TAMA works with SMEs to acquire patents for their technologies and advise companies on how to protect their intellectual property rights. According to a TAMA Association official, the organization has successfully assisted with 500 cases to commercialize products within the past 15 years. Securing and fostering human resources. Among other professional development projects, the TAMA Association accepts personnel to help other organizations learn how to better support local businesses. For example, local government or private sector officials may work for the TAMA Association for a 2-year period. Japan’s Public Industrial Technology Research Institutes, or Kohsetsushi Centers, provide Japanese SME manufacturers with a range of services including technology guidance; technical assistance and training; networking; testing, analysis, and instrumentation; and access to open laboratories and test beds. They typically offer technical consultation services free of charge. Kohsetsushi Centers support Japanese SME manufacturers in adopting emerging technologies, including nanotechnology and robotics, among others. According to one Kohsetsushi official, all centers are geared toward supporting local industry; they do not specialize in particular industries. The Kohsetsushi Centers are generally funded and managed by local prefectures but are operated under the guidance of the Ministry of Economy, Trade, and Industry. There are more than 180 centers throughout Japan—at least one in each of Japan’s 47 prefectures—and more than 6,000 staff. The Tokyo Metropolitan Industrial Technology Research Institute (TIRI) is one of the three largest Kohsetsushi centers in Japan. With a staff of about 275, it serves about a quarter of Tokyo’s 40,000 manufacturers across three locations, primarily by providing services and information to SMEs, according to one TIRI official. TIRI serves not only SME manufacturers but SMEs in other industries, as well. In addition, larger enterprises may use TIRI’s services, but they pay more for some services than SMEs. It is funded primarily by the Tokyo prefectural government, but 5 percent of its funding comes from the national government and another 8 percent from user fees. TIRI’s main support services include the following: Technical assistance. TIRI offers technical assistance including consultations and testing services and certifies test results. TIRI consultants provide advice and answer inquiries about technical problems free of charge by telephone or in person. In addition, TIRI tests products and parts to ensure they conform to industrial standards and provides non-standard testing upon request. 
TIRI conducts approximately 100,000 tests each year and has over 40 pieces of testing equipment for tests including temperature, acoustics, electromagnetism, humidity, voltage, vibration, impact, corrosion, and noise tolerance. For example, one TIRI official told us that a vacuum cleaner manufacturer might use the acoustic testing room to measure the decibel level of its products, and electronics manufacturers might test the ability of their products to withstand an electromagnetic shock (see Figure 4). According to one TIRI official, TIRI typically charges a nominal fee for testing services, but not for consultation. It also performs customized measuring and analysis services according to customer needs. Product development. TIRI supports product development through rental laboratory space available 24 hours a day, experimental facilities and environmental testing equipment for shared use, and customer development support. TIRI also supports commercialization by supporting planning and design—such as branding for SME products—and prototyping of products using 3D printing technology. R&D. TIRI supports R&D, including basic, joint, and commissioned research. TIRI plans and implements basic research independently to develop new technology or to solve various challenges that SMEs face. The center also conducts joint research with SMEs for product and technology development. For example, small manufacturers often send one or two of their staff members to work on Kohsetsushi Center projects. This provides opportunities for company research personnel to gain research experience, develop new technical skills, and transfer technology back to their firms. TIRI also conducts commissioned research for which it receives external grants from the national government and other organizations. According to a TIRI official, the center primarily follows the R&D priorities and policies set by the prefecture and TIRI, but universities and companies also contribute to that agenda. About 24 percent of TIRI’s research is basic, 62 percent is commissioned, and 14 percent is joint. Technical training. TIRI has classrooms for seminars on product design and courses, for which students pay a fee, on new technology, industry trends, and internationalization. For example, officials showed us door knobs for home use that they helped one SME design. It offered this particular client assistance on usability, suggesting that they tilt the knob head to make opening the door easier. It also develops curricula and holds seminars to respond to needs from individual businesses or particular industry groups. Collaboration between industry and academic researchers. TIRI offers various services for connecting industry, academia, and public institutes. For example, it has instituted the Tokyo Innovation Hub to facilitate networking among SMEs and to promote cooperation between SMEs, universities, and research institutions. It also supports activities to encourage collaboration among more than 20 associations through its Cross-industrial Association. Technology management. TIRI offers support for technology management, including seminars and on-site technical support to assist with strategic development and technology management techniques. It supports the development of new products that utilize TIRI patents. It also provides information to SMEs related to international standards compliance for clients who export products or enter into foreign markets. 
TIRI tracks 12 organizational targets to measure its performance, including the number of patents acquired by TIRI and its partners, the number of products commercialized, and the number of licenses granted. Furthermore, it conducts a satisfaction survey of users each year. According to TIRI officials, these surveys illustrate that the center has successfully provided customer service but that it faces challenges with R&D output. However, they said that, generally speaking, their testing services have been highly appreciated by customers. Along with two other centers in South Korea, the Daedeok Innopolis (Daedeok) is an innovation cluster that consists of universities, research institutes, government and government-invested institutions, corporate research institutes, and venture corporations. It receives funding from the national government—the Ministry of Trade, Industry, and Energy—but also generates revenue from private sector users. Focused on commercializing technology, Daedeok is the only science park in South Korea, according to officials. It has five separate zones, each with its own specialized field, including a research complex and other research institutes; an area for hi-tech firms; and an area for traditional manufacturing industries. Various entities located within Daedeok have developed technologies that have been popular in the marketplace, including 4th generation (4G) mobile technology and, according to officials, the lithium ion battery and a nuclear research reactor that put South Korea at the forefront of these technologies. The Daejeon Technopark (Daejeon) focuses on the business aspects of a research institute, including growing existing SMEs and supporting R&D in the information technology, nanotechnology, robotics, and mechatronics industries. It was established in coordination with Daejeon city and the national government as an outgrowth of the Daedeok Innopolis, and it came into existence when it was designated as a special R&D zone. While Daedeok focuses on technologies with great commercial potential, the Technopark focuses on smaller-scale companies and more routine assistance, such as marketing, according to Daedeok officials. Daedeok and Daejeon both provide support in the following areas for businesses that use their services: Commercialization. Daedeok networks with other regions and support organizations to promote its research results. Daejeon functions as a network hub between industry, academia, research institutes, and local government. For example, they connect SMEs to researchers or universities working on related research and provide SMEs access to technology. Intellectual property rights protection. Daejeon has a center dedicated to intellectual property through which it conducts research on existing technologies and trend analysis, supports domestic and international applications, trains companies on intellectual property protection, and supports intellectual property planning for Daejeon City, among other things. Daedeok manages unused patents and evaluates new technology to match technology suppliers to potential customers. Technology sharing. For a fee, Daedeok provides facilities for various types of testing on product prototypes, such as electrical, temperature, or acoustic. Daejeon provides companies access to technology along with business expertise and consultation for SMEs.
For example, during our visit, we observed testing of an electronic collection system (the technology used in "EZ Pass"-type cards) to see how it would perform at various speeds. Training. Daejeon provides a variety of training services for companies, both for new employees and continuing education on cutting-edge technology. Daedeok also provides training for start-up SMEs. In addition to these services, Daedeok assists SMEs through grants and one-on-one consulting services, such as matching technology suppliers and customers, providing design services, and evaluating new technologies. Daedeok's R&D expenditures comprise 15 percent of the country's total R&D expenditures, according to officials. It houses 30 research institutes, 5 universities, and approximately 1,000 companies (mostly SMEs). The area is responsible for developing approximately 40,000 patents. For government-designed projects at Daejeon, the private sector typically shares 20 percent of the costs, according to Daejeon officials. Currently, there are about 2,000 high-tech firms, according to Daejeon officials. One official noted that an assessment of Daejeon's equipment centers, which provide access to SMEs, revealed that these centers are only being used at 40 percent of their capacity. The official said that the usage rate is not an area for concern because these centers are relatively new. The official also emphasized that the center responds to the expressed need from over 250 area companies. In addition, the official noted that with a new president taking office, Daejeon could face challenges if policy changes occur, but the official opined that policy changes are likely to be minor. The Electronics and Telecommunications Research Institute (ETRI) is a global information technology research institute that works with the national South Korean government, the private sector, and universities to develop risky technology that the private sector is not willing to develop. It is the largest government-funded research institute in South Korea. ETRI, whose headquarters is located within the Daedeok Innopolis, collaborates with various institutes around Daedeok, such as research institutes focused on chemicals, energy, and satellites. It also conducts some joint projects with foreign universities, including some in the United States, and the private sector. These projects include research, technology transfer, research and business development, and training for foreign countries in areas such as information and communications technology. ETRI is one of approximately 25 research institutes located within the Ministry of Science, ICT, and Future Planning. In calendar year 2012, the institute planned to spend almost 80 percent of its budget on government-commissioned projects. ETRI conducts more than 500 projects each year and employs approximately 2,000 people, the majority of whom are engineers. ETRI is responsible for developing core information technology inventions, including 4th generation mobile technology, specialized handheld televisions, and a cancer diagnosis bio-chip for home use. The typical length for each project is between 3 and 5 years. ETRI measures success by the number of patent applications, amount of royalty income, number of international/domestic standards contributions, and the number of publications in science journals, according to one ETRI official.
In 2012 and 2013, ETRI was ranked number one with the highest patent activity by an intellectual property trade journal, which cited approximately 540 patents in 2012 and approximately 700 patents in 2013. The trade journal measures overall strength of patent portfolio holdings based on a combination of quality and quantity indicators, such as patents issued and science and research strength. The Korea Trade-Investment Promotion Agency (KOTRA) is the national implementing agency for Korea’s trade and investment goals and policies set by the Ministry of Trade, Industry & Energy. It facilitates South Korea’s economic development through various trade promotion activities, such as overseas market surveys and business matchmaking. It operates programs in: Intellectual property rights protection. KOTRA provides information about intellectual property right laws to firms that operate outside of South Korea. Overseas marketing. KOTRA develops and updates marketing strategies for South Korean products by the region and industry, and provides support to firms for their participation in exhibitions hosted overseas. KOTRA has also developed a “KOTRA global brand” program to support brand value to promote confidence in South Korean products that may be less familiar in overseas markets. KOTRA also hosts a global trade show to promote South Korea’s major export products. SME support. KOTRA has more than 100 Korea Business Centers in approximately 80 countries that function as incubation centers. KOTRA also connects SME exporters to logistics companies— companies that help manage the flow of resources—in more than 20 major cities and regions worldwide to lower logistics costs for more than 2,000 South Korean SMEs. Andrew Sherrill, (202) 512-7215 or sherrilla@gao.gov. Lawrance L. Evans, Jr., (202) 512-4802 or evansl@gao.gov. In addition to the contact named above, Laura Heald (Assistant Director), Kim Frankena (Assistant Director), Jaime Allentuck, Mark Glickman, and Cristina Ruggiero made key contributions to this report. In addition, key support was provided by James Bennett, David Chrisinger, Adam Cowles, Alexander Galuten, Jose A. Gomez, Ernie Jackson, John Lack, Kathy Leslie, Ashley McCall, Jean McSween, and Susan Offutt. Export Promotion: Better Information Needed about Federal Resources. GAO-13-644. Washington, D.C.: July 17, 2013. Export Promotion: Small Business Administration Needs to Improve Collaboration to Implement Its Expanded Role. GAO-13-217. Washington, D.C.: January 30, 2013. Science, Technology, Engineering, and Mathematics Education: Strategic Planning Needed to Better Manage Overlapping Programs across Multiple Agencies. GAO-12-108. Washington, D.C.: January 20, 2012. National Export Initiative: U.S. and Foreign Commercial Service Should Improve Performance and Resource Allocation Management. GAO-11-909. Washington, D.C.: September 29, 2011. Small Business Innovation Research: SBA Should Work with Agencies to Improve the Data Available for Program Evaluation. GAO-11-698. Washington, D.C.: August 15, 2011. Department of Commerce: Office of Manufacturing and Services Could Better Measure and Communicate Its Contributions to Trade Policy. GAO-11-583. Washington, D.C.: June 7, 2011. Factors for Evaluating the Cost Share of Manufacturing Extension Partnership Program to Assist Small and Medium-Sized Manufacturers. GAO-11-437R. Washington, D.C.: April 4, 2011. 
America COMPETES Act: It Is Too Early to Evaluate Program's Long-Term Effectiveness, but Agencies Could Improve Reporting of High-Risk, High-Reward Research Priorities. GAO-11-127R. Washington, D.C.: October 7, 2010. Export Promotion: Increases in Commercial Service Workforce Should Be Better Planned. GAO-10-874. Washington, D.C.: August 31, 2010. Best Practices: DOD Can Achieve Better Outcomes by Standardizing the Way Manufacturing Risks Are Managed. GAO-10-439. Washington, D.C.: April 22, 2010. International Trade: Observations on U.S. and Foreign Countries' Export Promotion Activities. GAO-10-310T. Washington, D.C.: December 9, 2009. America COMPETES Act: NIST Applied Some Safeguards in Obtaining Expert Services, but Additional Direction from Congress Is Needed. GAO-09-789. Washington, D.C.: August 7, 2009.
Over the last decade, the United States lost about one-third of its manufacturing jobs, raising concerns about U.S. manufacturing competitiveness. There may be insights to glean from government policies of similarly-situated countries, which are facing some of the same challenges of increased competition in manufacturing from developing countries. GAO was asked to identify innovative foreign programs that support manufacturing that may help inform U.S. policy. Specifically, GAO examined (1) government strategies and programs other advanced economies have implemented to approach issues similar to those facing U.S. manufacturing, and (2) the key distinctions between government approaches to support manufacturing in other advanced economies and those in the United States. Based on input from experts and federal officials, and an analysis of manufacturing programs in other advanced countries, GAO selected Canada, Germany, Japan, and South Korea for study. In each country, GAO interviewed program officials and reviewed documents describing their programs. To identify distinctions between foreign and U.S. approaches to supporting manufacturing, GAO researched comparable programs in the United States, and interviewed staff administering those programs. GAO is not making any recommendations in this report. GAO received only technical comments on this report from federal agencies. The four countries GAO analyzed--Canada, Germany, Japan, and South Korea--offer a varied mix of programs to support their manufacturing sectors. For example, Canada is shifting emphasis from its primary research and development (R&D) tax credit toward direct support to manufacturers to encourage innovation, particularly small- and medium-sized enterprises (SMEs). Germany has established applied institutes and clusters of researchers and manufacturers to conduct R&D in priority areas, as well as a national dual training system that combines classroom study with workplace training, and develops national vocational skills standards and credentials in 350 occupations. Japan has implemented science and technology programs--with a major focus on alternative energy projects--as part of a comprehensive manufacturing strategy. South Korea has substantially expanded investments in R&D, including the development of a network of technoparks--regional innovation centers that provide R&D facilities, business incubation, and education and production assistance to industry. When compared to the United States, the countries in GAO's study offer some key distinctions in government programs to support the manufacturing sector in the areas of innovation, trade, and training. While the United States and the other four countries all provide support for innovation and R&D, the foreign programs place greater emphasis on commercialization to help manufacturers bridge the gap between innovative ideas and sales. These include programs that support infrastructure as well as hands-on technical and product development services to firms, and that foster collaboration between manufacturers and researchers. In contrast, the United States relies heavily on competitive funding for R&D projects with commercial potential. Within trade policy, the United States and the four countries in GAO's study provide similar services, but there are several differences in how they are delivered. For example, the United States is an acknowledged leader in intellectual property protection, but the U.S. 
government plays a less prominent role than the Japanese government in developing technological standards on industrial products. A key difference related to training programs pertains to the sustained role of government in coordinating stakeholder input into a national system of vocational skills training and credentialing, which helps provide a supply of skilled workers for manufacturers. This was particularly evident in Germany. In contrast, the United States largely devolves vocational training to states and localities and does not have a national system to issue industry-recognized credentials. However, the U.S. manufacturing industry, with participation from the federal government, has recently launched an effort to establish nationally portable, industry-recognized credentials for the manufacturing sector. Overall, GAO's analysis shows the broad extent to which four countries that are U.S. competitors are leveraging the public sector to help their manufacturing industries maintain competitiveness in a rapidly changing global economy.
Title IX prohibits discrimination on the basis of sex in any education program or activity, including intercollegiate athletics, at colleges receiving federal financial assistance. The Department’s OCR is responsible for enforcing federal civil rights laws as they relate to schools, including title IX. In fiscal year 1995, OCR operated on a $58.2 million appropriation and with 788 full-time-equivalent staff. Federal regulations implementing title IX became effective in 1975 and specifically required gender equity in intercollegiate athletics. The regulations gave colleges a 3-year transition period (through July 21, 1978) to comply fully with the regulations’ requirements that equal athletic opportunity be provided for men and women. In 1979, OCR issued a Policy Interpretation providing colleges with additional guidance on what constituted compliance with the gender equity requirements of title IX. Under the Policy Interpretation, OCR applies a three-part test to help determine whether colleges provide equal athletic opportunity to male and female student athletes. To help determine whether equal athletic opportunity exists, OCR assesses whether “intercollegiate level participation opportunities for male and female students are provided in numbers substantially proportionate to their respective enrollments”; whether, when “the members of one sex have been and are underrepresented among intercollegiate athletes . . . the institution can show a history and continuing practice of program expansion which is demonstrably responsive to the developing interests and abilities of the members of that sex”; or whether, when “the members of one sex are underrepresented among intercollegiate athletes, and the institution cannot show a history and continuing practice of program expansion, as described above . . . it can be demonstrated that the interests and abilities of the members of that sex have been fully and effectively accommodated by the present program.” Colleges must meet any one of the three criteria of the test. In addition to the three-part test, OCR may use other factors to assess equality of opportunity in intercollegiate athletics, including the financial assistance and travel expenses provided to student athletes, the degree of publicity provided for athletic programs, the extent to which colleges recruit student athletes, and the extent of opportunities to participate in intercollegiate competition. OCR also assesses coaches’ assignments and compensation insofar as they relate to athletic opportunity for students. OCR both investigates discrimination complaints and conducts compliance reviews. Compliance reviews differ from complaint investigations in that they are initiated by OCR. Moreover, compliance reviews usually cover broader issues and affect significantly larger numbers of individuals than most complaint investigations do, although some complaint investigations can be just as broad in scope and effect. OCR selects review sites on the basis of information from various sources that indicates potential compliance problems. OCR is authorized to initiate administrative proceedings to refuse, suspend, or terminate federal financial assistance to a school violating title IX. However, in the more than 2 decades since title IX was enacted, according to an OCR official, the Department has not initiated any such administrative action for athletic cases because schools have complied voluntarily when violations have been identified. 
In addition to OCR’s enforcement of title IX, the Department implements the Equity in Athletics Disclosure Act. Under the act, coeducational colleges offering intercollegiate athletics and participating in any federal student financial aid program are required to disclose certain information, by gender, such as the number of varsity teams, the number of participants on each team, the amount of operating expenses, and coaches’ salaries. This information must be reported separately for men’s and women’s teams, and colleges were to have prepared their first reports by October 1, 1996; thereafter, reports are to be prepared annually by October 15th. Colleges must make the information available to students, potential students, and the public. Reports are not required to be submitted to the Department, but copies must be made available to the Department upon request. NCAA is a key organization in intercollegiate athletics. It is a voluntary, unincorporated association that administers intercollegiate athletics for nearly 1,000 4-year colleges and universities. NCAA member colleges belong to one of three divisions, the specific division generally depending on the number of sports the college sponsors. Typically, colleges with the largest number of athletic programs and facilities belong to Division I, and those with smaller programs are in Division II or III. Division I schools are further divided into three categories, Divisions I-A, I-AA, and I-AAA, with those that have the larger football programs generally placed in Division I-A. OCR’s strategy for encouraging gender equity in intercollegiate athletics emphasizes both preventing title IX violations and investigating complaints, although it receives relatively few complaints about alleged violations. Principal elements of OCR’s preventive approach include issuing guidance and providing technical assistance. In addition, a National Coordinator for Title IX Athletics has been appointed to manage title IX activities. OCR also considers compliance reviews important to prevention but has conducted few of them in recent years. OCR issued its “Clarification of Intercollegiate Athletics Policy Guidance” in January 1996 in response to requests from the higher education community to clarify the three-part test criteria presented in the 1979 Policy Interpretation. The Policy Interpretation allowed colleges’ intercollegiate athletic programs to meet any one of the three criteria of the test to ensure that students of both sexes are being provided nondiscriminatory opportunities to participate in intercollegiate athletics. In 1994 and 1995, OCR initiated focus groups to obtain a variety of views on its title IX guidance on intercollegiate athletics. Comments from the focus groups indicated that clarification of the three-part test was needed. While OCR was developing the clarification, the Congress held hearings in May 1995, during which concerns were expressed that the three-part test was ambiguous, thus confirming the need for additional guidance. Subsequently, congressional members asked the Assistant Secretary for Civil Rights to clarify OCR’s policy on the three-part test. The resulting 1996 clarification elaborates upon each part of the three-part test of equal athletic opportunity, provides illustrative examples of its application, and confirms that colleges are in compliance if they meet any one part of the test. 
The clarification states that a college meets the first criterion of the test if intercollegiate participation opportunities are substantially proportionate to enrollments. Such determinations are made on a case-by-case basis after considering each college's particular circumstances or characteristics, including the size of its athletic program. For example, a college where women represent 52 percent of undergraduates and 47 percent of student athletes may satisfy the first part of the three-part test without increasing participation opportunities for women if the number of additional opportunities needed to achieve exact proportionality would not be sufficient to sustain a viable team. The second part of the test concerns program expansion. OCR's clarification focuses on whether there has been a history of program expansion and whether it has been continuous and responsive to the developing interests and abilities of the underrepresented sex. The clarification does not identify fixed intervals of time for colleges to have added participation opportunities. To satisfy the second part of the test, a college must show actual program expansion and not merely a promise to expand its program. Under the third part of the test, a determination is made whether, among students of the underrepresented sex, there is (a) sufficient unmet interest in a particular sport to support a team, (b) sufficient ability to sustain a team among interested and able students, and (c) a reasonable expectation of intercollegiate competition for the team in the geographic area in which the school competes. To make its determination, OCR evaluates such information as requests by students to add a sport, results of student interest surveys, and competitive opportunities offered by other schools located in the college's geographic area. Since fiscal year 1992, OCR has investigated and resolved 80 intercollegiate athletics complaints to which the three-part test was applied. Of the colleges involved in these 80 complaints, 16 either demonstrated compliance or are taking actions to comply with part one; 4, with part two; and 42, with part three. The remaining 18 schools have yet to determine how they will comply because they are still implementing their settlement agreements. These agreements obligate the schools to comply with one part of the three-part test by a certain date, but OCR's monitoring efforts do not yet indicate which part of the test they will satisfy. OCR provides technical assistance through such activities as participating in on-site and telephone consultations and conferences, conducting training classes and workshops, and disseminating educational pamphlets. For example, OCR staff conduct title IX workshops for schools, athletic associations, and other organizations interested in intercollegiate athletics. Although OCR could not tell us the total number of technical assistance activities it conducted specific to title IX in intercollegiate athletics, it did provide 47 examples of national, state, or local title IX presentations made between October 1992 and April 1996. OCR also coordinates title IX education efforts with NCAA. For example, the Assistant Secretary for Civil Rights spoke at an NCAA-sponsored title IX seminar in April 1995, and OCR representatives have participated in subsequent NCAA-sponsored seminars. The Assistant Secretary for Civil Rights created the position of National Coordinator for Title IX Athletics in 1994.
According to the National Coordinator, who reports directly to the Assistant Secretary, this position was created to (1) improve the coordination of resources focused on gender equity in athletics among OCR’s 12 offices; (2) prioritize management of title IX activities; (3) ensure timely, consistent, and effective resolution of title IX cases and other issues; and (4) ensure all appropriate OCR staff are trained in conducting title IX athletics investigations in accordance with revised complaint resolution procedures. The National Coordinator told us the creation of the position has resulted in greater consistency in resolving athletics cases and faster responses from OCR offices to athletics inquiries. These improvements were accomplished, in part, by more frequent communication between the National Coordinator and OCR offices using a recently implemented national automated communications network, improved on-the-job training for OCR staff in case resolution, and the establishment of a central source of title IX athletics information. Although OCR investigates and resolves all intercollegiate athletics complaints that are filed in a timely manner, fewer than 100 such complaints were filed between October 1991 and June 1996. These complaints represented 0.4 percent of all civil rights complaints filed during that period (see table 1). Most of the approximately 23,000 complaints filed with OCR during that period dealt with other areas of civil rights, including disability, race, and national origin. OCR’s title IX activities have focused recently more on policy development, technical assistance, and complaint investigations and less on assessing schools’ compliance with title IX through compliance reviews. Although its strategic plan emphasizes the value of conducting OCR-initiated compliance reviews to maximize the effect of available resources, it conducted only two such reviews in 1995 and none in fiscal year 1996, and it plans none in fiscal year 1997. OCR attributes this decline to resource constraints. As table 2 shows, OCR conducted 32 title IX intercollegiate athletics compliance reviews during fiscal years 1992 through 1996, with the largest number being conducted in 1993. NCAA’s constitution charges it with helping its member colleges meet their legislative requirements under title IX. Following the 1992 NCAA Gender Equity Study, which showed that women represented 30 percent of all student athletes and received 23 percent of athletic operating budgets, NCAA created a task force to further examine gender equity in its member colleges’ athletic programs. NCAA has since implemented the following recommendations made by the task force. NCAA incorporated the principle of gender equity into its constitution in 1994. Recognizing that each member college is responsible for complying with federal and state laws regarding gender equity, the principle states that NCAA should adopt its own legislation to facilitate member schools’ compliance with gender equity laws. According to NCAA, the Athletics Certification Program, begun in academic year 1993-94, was developed to ensure that Division I athletic programs are accredited in a manner similar to the way academic programs are accredited. The certification process includes a review of Division I colleges’ commitment to gender equity. Schools are required to collect such information as the gender composition of their athletic department staff and the resources allocated to male and female student athletes. 
Schools must also evaluate whether their athletic programs conform with NCAA’s gender equity principle and develop plans for improving their programs if they do not. As of June 1996, NCAA reported that 70 of the 307 Division I schools (or 23 percent) had been certified. The remaining schools are scheduled to be certified by academic year 1998-99. The certification procedure takes about 2 years to complete and includes site visits by an NCAA evaluation team and self-studies by the schools. Schools not meeting certification criteria must take corrective action within an established time frame. Schools failing to take corrective action may be ineligible for NCAA championship competition in all sports for up to 1 year. If, after 1 year the school has not met NCAA’s certification criteria, it is no longer an active member of NCAA. According to NCAA, to date it has not been necessary to impose such sanctions on any school undergoing certification. NCAA’s 1992 gender equity study reported the results of a survey of its membership’s athletic programs. The study will be updated every 5 years, with the next issuance scheduled for 1997. To update the study, NCAA developed and distributed a form to collect information on colleges’ athletic programs. The data the form is designed to gather include the information schools must collect under the Equity in Athletics Disclosure Act. Thus, in addition to publishing its gender equity study, NCAA will be able to aggregate the data in reports prepared by colleges under the Disclosure Act. The deadline for submitting data collection forms to NCAA is the end of October 1996. To help schools achieve gender equity in intercollegiate athletics as well as to meet the interests and abilities of female student athletes, the NCAA Gender Equity Task Force identified nine emerging sports that may provide additional athletic opportunities to female student athletes. Effective September 1994, NCAA said that schools could use the following sports to help meet their gender equity goal: archery, badminton, bowling, ice hockey, rowing (crew), squash, synchronized swimming, team handball, and water polo. In academic year 1995-96, 122 of the 995 (or 12 percent) NCAA schools with women’s varsity sports programs offered at least one of the emerging sports. In 1994, NCAA developed a guidebook on achieving gender equity. The guidebook supplements OCR’s title IX guidance and provides schools’ athletic administrators with basic knowledge of the law and how to comply with it. NCAA also coordinates with OCR to provide its member schools—and others—training and technical assistance through title IX seminars. NCAA held two such seminars in April 1995 (the Assistant Secretary for Civil Rights participated in one of the seminars) and two in April 1996. The seminars were attended by athletic directors, general counsels, gender equity consultants, OCR representatives, and others representing groups interested in gender equity in intercollegiate athletics. States promote gender equity in intercollegiate athletics through a variety of means. Over half of the states were involved in promoting gender equity in intercollegiate athletics. To identify state gender equity initiatives, we surveyed state higher education organizations in all 50 states and the District of Columbia. For reporting purposes, we collectively refer to the 51 respondents as states. Overall, 32 of the 51 states (63 percent) had taken some type of action to promote gender equity in intercollegiate athletics. 
Information provided by the 51 respondents is summarized in table 3; appendix II discusses the responses in more detail. Some respondents also provided observations of conditions that they believe may facilitate or hinder gender equity in intercollegiate athletics at colleges within their states. Conditions that some believed may facilitate gender equity included a commitment from individuals in leadership positions, state gender equity legislation, and a high participation by girls in K-12 athletics. Conditions that some believed may hinder gender equity included insufficient funds; the presence of football programs, which women are unlikely to participate in; and the perception that women are not as interested in athletics as men are. The eight studies on gender equity in intercollegiate athletics that we identified showed that women’s athletic programs have made slight advances since 1992 toward gender equity as measured by the number of sports available to female students, the number of females participating in athletics, and the percentage of scholarship expenditures for women’s sports. The studies also show, however, that women’s programs remain behind men’s programs as measured by the percentage of female head coaches, comparable salaries for coaches, and ratio of student athletes to undergraduate enrollment. All eight studies were national in scope and examined gender equity in the athletic programs at NCAA-member schools since 1992. Although most of the studies used surveys, some studies were based on different sample sizes or time periods, making direct comparisons among studies inappropriate. While the studies selectively evaluated the effect of title IX on various aspects of gender equity in intercollegiate athletics, they did not evaluate schools’ compliance with title IX. See appendix III for additional information on the studies; see also the bibliography. The studies reported some advances toward equity between men’s and women’s intercollegiate athletics: The average number of sports offered to women rose from 7.1 in 1992 to 7.5 in 1996, an increase of almost 6 percent. Schools in all three NCAA divisions have added women’s programs in the last 5 years, which one study attributed to the implementation of title IX legislation. An almost equal number of women’s and men’s sports (about 4.5) used marketing and promotional campaigns designed to increase event attendance. In fiscal year 1993, women at NCAA Division I schools received about 31 percent of athletic scholarship funds, an increase of about 3 percentage points from fiscal year 1989. Similarly, women’s programs received 24 percent of total average athletic operating expenses, including scholarships, scouting and recruiting, and other expenses—also an increase of about 3 percentage points from fiscal year 1989. Female student participation in intercollegiate athletic programs has increased. For example, one study showed that the proportion of female student athletes increased from 34 percent of all student athletes in 1992 to 37 percent in 1995, an annual rate of increase of 1 percentage point. The studies also showed that women’s athletic programs continue to lag behind men’s programs in certain respects: Most of the head coaches for women’s teams are male. In 1996, women accounted for about 48 percent of head coaches for women’s teams. This represented a slight decline (0.6 percentage points) from the percentage of female coaches in 1992. 
In contrast, more than 90 percent of women’s teams were coached by females in 1972, the year title IX was enacted. Head coaches of women’s basketball teams earned 59 percent of what head coaches of men’s basketball teams earned, as reported in 1994. Women often constituted half of all undergraduates in 1995, while constituting only 37 percent of all student athletes. In commenting on a draft of our report, the Department of Education clarified several issues, including the reason compliance reviews have declined, the extent of OCR’s work with other agencies in support of title IX policies and procedures, the differences between compliance reviews and complaint investigations, and the context in which coaches’ employment is considered by OCR in a title IX review (see app. V). The Department also offered a number of technical changes. In general, we agreed with the Department’s comments, and incorporated them into the report, as appropriate. We are sending copies of this report to the Secretary of Education; appropriate congressional committees; the Executive Director, NCAA; and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions about this report. Major contributors to this report were Joseph J. Eglin, Jr., Assistant Director; R. Jerry Aiken; Deborah McCormick; Charles M. Novak; Meeta Sharma; Stanley G. Stenersen; Stefanie Weldon; and Dianne L. Whitman-Miner. To determine the actions the Department of Education has taken to promote gender equity in intercollegiate athletics since 1992, we interviewed the National Coordinator for Title IX Athletics and analyzed information from the Department’s Office for Civil Rights (OCR). We obtained information on the National Collegiate Athletic Association’s (NCAA) gender equity actions by interviewing its Director of Education Outreach, Director of Research, and officials in its Compliance Department. We also analyzed documentation they provided. To identify state gender equity initiatives, we developed a questionnaire and sent it to agencies with oversight responsibility for public higher education in each of the 50 states and the District of Columbia. In nearly all cases, we spoke with staff at the higher education agency. When necessary for clarification, we conducted follow-up telephone interviews. We supplemented this information with supporting documentation provided by state representatives. The questionnaire went to 56 organizations: 41 higher education boards or boards of regents, 9 state university or college systems, 5 community college systems, and 1 public 4-year institution. Five states had separate higher education oversight organizations for 2- and 4-year institutions. We therefore received two sets of responses from these states, one for 2-year and the other for 4-year institutions. We combined the two sets of responses into one response to reflect the state’s gender equity initiatives. We received completed surveys from all 50 states and the District of Columbia. The questionnaire requested data on the existence of state gender equity officials; type of gender equity initiatives, if any (that is, legislation, requirements, policy recommendations, or other actions); methods used to promote gender equity; indicators used to measure gender equity; actual or estimated trends for each indicator; compliance and guidance efforts associated with the Equity in Athletics Disclosure Act; and conditions that help or hinder gender equity within the state. 
All information was self-reported by state representatives, and we did not verify its accuracy. To identify studies on gender equity in intercollegiate athletics issued since 1992, we conducted a literature search and consulted academic experts and professional organizations that deal with gender equity, intercollegiate athletics, or both. (See app. IV for a list of organizations contacted for this report. We have also included a bibliography.) The sources we consulted identified eight studies on gender equity in intercollegiate athletics that were national in scope and were issued since 1992. Most of the studies were surveys of NCAA schools. We reviewed the information in the studies and summarized the key findings, but we did not verify their accuracy. We performed our work between April and August 1996 in accordance with generally accepted government auditing standards. This appendix contains the responses to questions we asked higher education officials in the 50 states and the District of Columbia (referred to in this appendix as 51 states) about gender equity efforts in intercollegiate athletics. All responses reflect statewide gender equity actions. Regarding the number and type of sanctions imposed, eight states responded to the question; the remaining 43 states did not use any indicators. The eight national studies we identified that were issued between 1992 and 1996 examined various aspects of gender equity within NCAA schools' intercollegiate athletics programs. Because they varied in the time periods they studied, sample size, purpose, and methodology, the studies cannot be compared with each other. While some studies discuss the overall effect of title IX on women's athletics, they do not present sufficient information to determine whether the colleges were in compliance with title IX. The following is a summary of the key findings of each study. Authors and Date of Study: Acosta and Carpenter (1996) Scope and Time Period Studied: All NCAA schools, academic years 1977-78 to 1995-96 Summary: This longitudinal study examined the number of sport offerings as an indicator of opportunities for women athletes to participate in intercollegiate athletics at NCAA schools. It also reported the percentage of NCAA schools offering each type of sports program. The study identified 24 sports that schools could offer to female students. The percentage of schools offering sports programs to female students in 1996 varied considerably by sport, ranging, for example, from 98.3 percent of schools offering basketball to 0.3 percent offering badminton. In addition, the average number of sports being offered to female intercollegiate athletes generally increased from 7.1 sports per school in 1992 to 7.5 sports in 1996, for all three NCAA divisions (see table III.1). The study noted that the average number of women's sports offered in 1996 was the highest since this information was first reported in 1978. The average number of sports offered per school was also reported for each NCAA division for 1996: 8.3 (Division I), 6.1 (Division II), and 7.8 (Division III). The study also examined the percentage of female coaches and female administrators (head athletic directors) as two other indicators of participation opportunities for women at NCAA schools. The study found that, for women's teams, the percentages of female coaches and female administrators were lower than the percentages of male coaches and administrators.
While figures for individual years fluctuated, they did not vary much between academic years 1992 and 1996 (see table III.2). The study also noted that the percentage of female coaches in 1996 was the second lowest representation level since title IX was enacted in 1972. By contrast, more than 90 percent of women's teams were coached by females in 1972. The study concluded that title IX has had more of a positive effect on participation opportunities for female student athletes than for female coaches and administrators. Authors and Date of Study: Barr, Sutton, McDonald, and others (1996) Scope and Time Period Studied: Members of the National Association of Collegiate Marketing Administrators at NCAA schools, 1996 Summary: The study of marketing and promotion of women's programs involved a survey of members of the National Association of Collegiate Marketing Administrators. The study preliminarily concluded that NCAA schools and their marketing departments appeared to have good intentions in supporting women's programs, but athletic departments were not adding the personnel needed to effectively market and promote women's sports. The study reported the following: Women's sports received 37 percent of schools' mean athletic marketing budgets. This result was positively correlated with the overall athletic department budget allocated to women's and men's sports. The mean number of sports offered at NCAA schools was 9.2 for women and 9.2 for men. Given the relative equality of the two estimates, the study suggested title IX may have had a positive effect on the number of women's sports being offered. Marketing and promotional campaigns designed to increase event attendance were used for an almost equal number of women's sports (4.5) and men's sports (4.6); however, the study did not indicate the attendance levels or whether they had increased as a result of marketing and promotional campaigns. Schools at each NCAA division level have added women's programs in the last 5 years as a result of title IX legislation; the mean number of women's programs added ranges from 1.0 to 3.5 sports per school. Within Division I-A, the method cited most frequently for deciding what programs to add was direction from an NCAA conference to its member schools to add specific sports. For Division I colleges with no football programs, the most frequent method was the elevation of an existing club sport to the intercollegiate level. Not many men's sports programs have been dropped in the last 5 years: the mean number ranged from 0.1 to 1.0 per school. The most common reasons given for reducing men's sports were to comply with title IX and to contain athletic programs' costs. No women's sports programs had a full-time staff member devoted to marketing their sports. Authors and Date of Study: NCAA (1995) Scope and Time Period Studied: All NCAA schools, academic years 1982-83 to 1994-95 Summary: Female student athlete participation rose from 34 percent of all student athletes in 1992 to 37 percent in 1995, an increase of about 1 percentage point a year. Authors and Date of Study: USA Today (1995) Scope and Time Period Studied: NCAA Division I-A football schools, academic year 1994-95 Summary: The study assessed the effects of title IX on college campuses by surveying the 107 NCAA Division I-A schools. The responses for the 95 schools that replied showed the following: Women were, on average, 33 percent of student athletes and 49 percent of undergraduates.
Female athletes received 35 percent of scholarships the schools provided. Forty percent of the schools added a women's sport in the last 3 years. Fifty-nine percent of the responding schools planned to add at least one women's sport in the next 3 years. Authors and Date of Study: Chronicle of Higher Education (1994) Scope and Time Period Studied: NCAA Division I schools, academic year 1993-94 Summary: The survey measured progress in achieving gender equity since the issuance of the 1992 NCAA Gender Equity Study, which showed disparities in the number of male and female student athletes and the amount of athletic scholarship money they received. The survey concluded that little had changed since the NCAA study was issued. It identified a slight increase in the proportion of female student athletes and their share of athletic scholarship funds; however, participation opportunities and scholarship funds continued to lag behind those for men, even though women constituted over half of the colleges' undergraduates. Responses from 257 of the 301 NCAA Division I schools showed the following: Women made up about 34 percent of varsity athletes and about 51 percent of undergraduates. Female athletes received almost 36 percent of scholarship funds. Authors and Date of Study: NCAA (1994) Scope and Time Period Studied: All NCAA schools, fiscal year 1992-93 Summary: NCAA's study of member schools' expenses found that about 24 percent of the total average operating expenses, including grants-in-aid (scholarships), went to women's programs at Division I schools in fiscal year 1992-93 (see table III.3). Authors and Date of Study: American Volleyball Coaches Association (1995) Scope and Time Period Studied: Coaches at NCAA schools and schools belonging to other athletic associations or college systems that officially conduct intercollegiate volleyball programs, 1993 Summary: The survey gathered information on various aspects of coaches' compensation, including that of head coaches, at NCAA schools and schools belonging to other athletic associations or college systems with intercollegiate volleyball programs. However, meaningful findings were derived only from NCAA Division I women's intercollegiate volleyball programs. Response rates were lower for all the other schools with volleyball programs. Response rates were particularly low for men's programs, precluding any comparisons between men's and women's programs. For women's volleyball, the survey showed about 48 percent of head coaches were female, and their average base salary was $32,383, about 2 percent less than that earned by males coaching women's volleyball. Authors and Date of Study: Women's Basketball Coaches Association (WBCA) (1994) Scope and Time Period Studied: Head coaches at NCAA Division I schools who were WBCA members, 1994 Summary: The survey included an examination of head coaches' salaries, employment contract terms, budgets, and staffing at NCAA Division I schools with basketball programs. Information for both men's and women's basketball programs was provided by the head coach of the women's program. The results showed significant disparities between women's and men's basketball programs in the average base salary for the head coach, coaching contracts, and program budgets (see table III.4). For example, head coaches of women's basketball earned 59 percent of what head coaches of men's basketball earned, and women's average annual athletic budgets were 58 percent of men's budgets.
The study also reported that men’s basketball programs employed more graduate staff and at higher average salaries than women’s programs. For women’s basketball programs, however, few differences were found in average base salary and contract terms for male and female head coaches. American Association of University Women, Washington, D.C. American Council on Education, Washington, D.C. American Sports Institute, Mill Valley, Calif. Boise State University, Boise, Idaho Center for Research on Girls and Women in Sport, University of Minnesota, Minneapolis, Minn. Council of Chief State School Officers, Washington, D.C. Eastern Oregon State College, LaGrande, Oreg. Education Commission of the States, Denver, Colo. Harvard School of Public Health, Cambridge, Mass. Moorhead State University, Moorhead, Minn. National Association for Girls and Women in Sport, Reston, Va. National Association of Collegiate Women Athletics Administrators, Sudbury, Mass. National Coalition for Sex Equity in Education, Clinton, N.J. National Softball Coaches Association, Columbia, Mo. National Women’s Law Center, Washington, D.C. Princeton University, Princeton, N.J. Smith College, Northampton, Mass. Trial Lawyers for Public Justice, Washington, D.C. University of California, Berkeley, Calif. University of Massachusetts, Amherst, Mass. Washington State University, Pullman, Wash. Women’s Educational Equity Act Publishing Center, Education Development Center, Inc., Newton, Mass. Women’s Institute on Sports and Education, Pittsburgh, Pa. Women’s Sports Foundation, East Meadow, N.Y. Young Women’s Christian Association, New York, N.Y. Acosta, Vivian R. and Linda Jean Carpenter. Women in Intercollegiate Sport, A Longitudinal Study, Nineteen Year Update, 1977-1996. Brooklyn, N.Y.: Brooklyn College, 1996. American Volleyball Coaches Association. 1992-1993 Survey, Women’s Volleyball Programs. Colorado Springs, Colo.: AVCA, 1995. Barr, Carol A., William A. Sutton, Mark M. McDonald, and others. Marketing Implications of Title IX to Collegiate Athletic Departments (preliminary report). Amherst, Mass.: University of Massachusetts, 1996. Blum, Debra E. “Slow Progress on Equity.” Chronicle of Higher Education (Oct. 26, 1994), p. A45. http://www.chronicle.com (cited Mar. 4, 1996). Cheng, Phyllis W. “The New Federalism and Women’s Educational Equity: How the States Respond.” Paper presented at the annual meeting of the Association of American Geographers, Phoenix, Ariz., 1988. Feminist Majority Foundation. Empowering Women in Sports, No. 4. Arlington, Va.: Feminist Majority Foundation, 1995. Fulks, Daniel L. Revenues and Expenses of Intercollegiate Athletics Programs: Financial Trends and Relationships, 1993. Overland Park, Kans.: NCAA, 1994. Grant, Christine and Mary Curtis. Gender Equity: Judicial Actions and Related Information. Iowa City, Iowa: University of Iowa, 1996. http://www.arcade.uiowa.edu/proj/ge (cited Mar. 1, 1996). Knight Foundation Commission on Intercollegiate Athletics. Reports of the Knight Foundation Commission on Intercollegiate Athletics: March 1991 - March 1993. Charlotte, N.C.: Knight Foundation Commission on Intercollegiate Athletics, 1993. Lederman, Douglas. “A Chronicle Survey: Men Far Outnumber Women in Division I Sports.” Chronicle of Higher Education (Apr. 8, 1992), p. A1. http://www.chronicle.com (cited Mar. 21, 1996). Lyndon B. Johnson School of Public Affairs. Gender Equity in Intercollegiate Athletics: The Inadequacy of Title IX Enforcement by the U.S. Office for Civil Rights, Working Paper No. 69. 
Austin, Tex.: University of Texas at Austin, 1993. National Collegiate Athletic Association. Participation Statistics Report, 1982-1995. Overland Park, Kans.: National Collegiate Athletic Association, 1996. National Federation of State High School Associations. 1995 High School Athletics Participation Survey. Kansas City, Mo.: National Federation of State High School Associations, 1995. Raiborn, Mitchell H. Revenues and Expenses of Intercollegiate Athletics Programs: Analysis of Financial Trends and Relationships, 1985-1989. Overland Park, Kans.: NCAA, 1990. Tom, Denise, ed. "Title IX: Fairness on the Field." USA Today, three-part series (Nov. 7-9, 1995), pp. 4C, 8C. Women's Basketball Coaches Association. 1994 Survey of WBCA Division I Head Coaches. Lilburn, Ga.: WBCA, 1994.
Pursuant to a congressional request, GAO reviewed Department of Education and National Collegiate Athletic Association (NCAA) efforts to promote gender equity in intercollegiate athletics by implementing title IX of the Education Amendments of 1972, focusing on: (1) steps taken by states to promote gender equity in college athletic programs; and (2) what existing studies show about progress made since 1992 in promoting gender equity in intercollegiate athletics. GAO found that: (1) since 1992, the Department of Education's Office of Civil Rights (OCR) has focused on prevention of title IX violations by clarifying its policies on title IX compliance and increasing technical assistance to help colleges meet title IX requirements while it continues to investigate the relatively few complaints filed each year; (2) NCAA created a task force to examine gender equity issues and now requires certification that athletic programs at all Division I schools meet NCAA-established gender equity requirements; (3) state efforts to promote or ensure gender equity in intercollegiate athletics vary considerably; (4) of the 22 states that reported having laws or other requirements to specifically address gender equity in intercollegiate athletics, 13 reported having full- or part-time staff responsible for gender equity issues; and (5) results from 8 national gender equity studies show gains in the number of women's sports that schools offer, number of female students participating in athletics, and percentage of scholarship funds available to female students, but many women's athletics programs lag behind those for men in the percentage of female head coaches, salaries paid to coaches, and proportion of women athletes to total undergraduate enrollment.
From fiscal years 2001 through 2007, the Employment Litigation Section initiated more than 3,200 matters and filed 60 cases as plaintiff under federal statutes prohibiting employment discrimination. About 90 percent of the matters initiated (2,846 of 3,212) and more than half of the cases filed (33 of 60) alleged violations of section 706 of Title VII of the Civil Rights Act, which involves individual claims of employment discrimination. Many of the Section's matters are driven by what the Section receives from other agencies. During the 7-year period, about 96 percent of the matters initiated (3,087 of 3,212) resulted from referrals from the Equal Employment Opportunity Commission and the Department of Labor. The number of matters initiated under section 706 and the Uniformed Services Employment and Reemployment Rights Act (USERRA) declined in the latter fiscal years, which a Section Chief attributed to a decline in referrals from these two agencies. In addition to addressing discrimination against individuals, the Section also initiated more than 100 pattern or practice matters at its own discretion. Because the Section did not require staff to maintain information in ICM on the subjects (e.g., harassment and retaliation) of the matters or the protected class (e.g., race and religion) of the individuals who were allegedly discriminated against, we could not determine this information for more than 80 percent of the matters the Section closed from fiscal years 2001 through 2007. According to Section officials, staff are not required to do so because the Section does not view this information as necessary for management purposes. The Section also does not systematically collect information in ICM on the reasons matters were closed; therefore, we were not able to readily determine this information for the approximately 3,300 matters the Section closed over the time period of our review. Division officials stated that when planning for ICM's implementation with Section officials, the Division did not consider requiring sections to provide protected class and subject data or the need to capture in ICM the reasons that matters are closed. However, by conducting interviews with agency officials and reviewing files for a nongeneralizable sample of 49 closed matters, we were able to determine that the reasons the Section closed these matters included, among others, that the facts in the file would not justify prosecution, that the issue was pursued through private litigation, and that the employer provided or offered appropriate relief on its own. In addition to the matters initiated, the Employment Litigation Section filed 60 cases in court as plaintiff from fiscal years 2001 through 2007, and filed more than half (33 of 60) under section 706 of Title VII. According to a Section Chief and Deputy Section Chief, the primary reason for pursuing a case was that the case had legal merit. Other priorities, such as those of the Assistant Attorney General, may also influence the Section's decision to pursue particular kinds of cases. For example, according to Section officials, following the terrorist attacks of September 11, 2001, the Assistant Attorney General asked the various sections within the Division to make the development of cases involving religious discrimination a priority.
During the 7-year period, the majority of the section 706 cases (18 of 33) involved sex discrimination against women, and one-third (11 of 33) involved claims of race discrimination, with six cases filed on behalf of African Americans and five cases filed on behalf of whites. In addition to these 33 cases, the Section filed 11 pattern or practice cases. Most of the 11 pattern or practice cases involved claims of discrimination in hiring (9 of 11) and the most common protected class was race (7 of 11), with four cases filed on behalf of African Americans, two on behalf of whites, and one on behalf of American Indians or Alaska Natives. In July 2009, Section officials told us that given that the Assistant Attorneys General who authorized suits from fiscal years 2001 through 2007 and the Section Chief who made suit recommendations to the Assistant Attorneys General during that period are no longer employed by DOJ, it would be inappropriate for them to speculate as to why the Section focused its efforts in particular areas. From fiscal years 2001 through 2007, the Housing and Civil Enforcement Section initiated 947 matters and participated in 277 cases under federal statutes prohibiting discrimination in housing, credit transactions, and certain places of public accommodation (e.g., hotels). The Section has the discretion to investigate matters and bring cases under all of the statutes it enforces, with the exception of certain cases referred under the Fair Housing Act (FHA) from the Department of Housing and Urban Development (HUD), which the Section is statutorily required to file. The Section, however, has discretion about whether to add a pattern or practice allegation to these HUD-referred election cases, if supported by the evidence. Furthermore, the Section has the authority and discretion to independently file pattern or practice cases and to pursue referrals from other sources. During the 7-year period, the Section initiated more matters (517 of 947) and participated in more cases (257 of 277) involving discrimination under the FHA than any other statute or type of matter or case. The Section initiated nearly 90 percent of the FHA matters (456 of 517) under its pattern or practice authority; these primarily alleged discrimination on the basis of race or disability and involved land use/zoning/local government or rental issues. According to Section officials, the large number of land use/zoning/local government matters it initiated was due to the Section regularly receiving referrals from HUD and complaints from other entities on these issues. Additionally, Division officials identified that a Section priority during the 7-year period was to ensure that zoning and other regulations concerning land use were not used to hinder the residential choices of individuals with disabilities. During this time, the Section experienced a general decline in HUD election matters, with the Section initiating the fewest number of total matters, 106, in fiscal year 2007. Section officials attributed the decrease, in part, to a decline in HUD referrals because state and local fair housing agencies were handling more complaints of housing discrimination instead of HUD. The Section initiated the second largest number of matters (252 of 947) under the Equal Credit Opportunity Act (ECOA). About 70 percent (177 of 252) of these ECOA matters included allegations of discrimination based on age, marital status, or both. 
The majority (250 of 269) of the cases that the Section filed as plaintiff included a claim under the FHA. Similar to the Employment Litigation Section, the Housing Section considers legal merit and whether the plaintiff has the resources to proceed on his or her own should the Section choose not to get involved, among other reasons, when deciding whether to pursue a matter as a case. The number of cases filed by the Section each year generally decreased from fiscal years 2001 through 2007—from 53 to 35—which, similar to matters, Section officials generally attributed to fewer HUD referrals. The FHA cases primarily involved rental issues (146). According to Section officials, the number of rental-related issues is reflective of larger national trends in that discrimination in rental housing may be more frequently reported or easier to detect than in home sales. Most of the FHA cases alleged discrimination on the basis of disability (115) or race (70)—66 of which involved racial discrimination against African Americans. The Section filed 9 cases under ECOA, of which 5 were in combination with the FHA. All 9 complaints involved lending issues. Seven of the 9 complaints included at least one allegation of racial discrimination and 4 included at least one allegation of discrimination on the basis of national origin/ethnicity. From fiscal years 2001 through 2007, the Voting Section initiated 442 matters and filed 56 cases to enforce federal statutes that protect the voting rights of racial and language minorities, disabled and illiterate persons, and overseas and military personnel, among others. The Voting Section has the discretion to initiate a matter or pursue a case under its statutes, with the exception of the review of changes in voting practices or procedures, which it is statutorily required to conduct under section 5 of the Voting Rights Act (VRA). According to Section officials, the Section had as its priority the enforcement of all the statutes for which it was responsible throughout the period covered by our review. However, Section and Division officials identified shifts in the Section’s priorities beginning in 2002. For example, the Assistant Attorney General in place from November 2005 through August 2007 stated that since 2002, the Section had increased its enforcement of the minority language provisions of the VRA and instituted the most vigorous outreach efforts to jurisdictions covered by the minority language provisions of the act. During the 7-year period, the Section initiated nearly 70 percent of VRA matters (246 of 367) on behalf of language minority groups, primarily Spanish speakers (203 of 246). The Section also initiated 162 matters under section 2 of the VRA. The Section initiated about half of these matters on behalf of language minority groups (80), primarily Spanish speakers (71), and about half on behalf of racial minorities (88 of 162), primarily African American voters (71 of 88). During the 7-year period, the Voting Section filed 56 cases, primarily under the VRA (39). The majority of the cases the Section filed in court under the VRA were on behalf of language minority groups (30 of 39), primarily Spanish speakers (27). The Acting Assistant Attorney General reported in September 2008 that the Division had brought more cases under the VRA’s minority language provisions during the past 7 years—a stated priority— than in all other years combined since 1975. 
While cases involving language minority groups were filed under various VRA provisions, the largest number of cases (24 of 30) involved claims under section 203 alleging that the covered jurisdiction had failed to provide voting-related materials or information relating to the electoral process in the language of the applicable minority group. The Section filed 13 cases involving a claim under section 2 of the VRA––5 on behalf of language minority groups and 10 on behalf of racial minority groups (6 on behalf of Hispanics, 3 on behalf of African Americans, and 1 on behalf of whites). In October 2007, the Section Chief who served from 2005 through late 2007 told us that while at-large election systems that discriminated against African Americans remained a priority of the Section, not many of these systems continued to discriminate, and new tensions over immigration had emerged; therefore, the Section had been pursuing cases of voting discrimination against citizens of other minority groups. However, in September 2009, Voting Section officials stated that while many at-large election systems that diluted minority voting strength have been successfully challenged, the Section continued to identify such systems that discriminate against African American, Hispanic, and Native American residents in jurisdictions throughout the country and that taking action against at-large election systems remained a high priority for the Section. The Section also carried out its responsibilities under section 5 of VRA, which requires certain jurisdictions covered under the act to “preclear” changes to voting practices and procedures with DOJ or the United States District Court for the District of Columbia to determine that the change has neither the purpose nor the effect of discriminating against protected minorities in exercising their voting rights. The Section reported that over the 7-year period it made 42 objections to proposed changes, of which almost 70 percent (29 of 42) involved changes to redistricting plans. More than half (17) of the 29 objections were made in fiscal year 2002, following the 2000 census, and two were made from fiscal years 2005 through 2007. From fiscal years 2001 through 2007, the Special Litigation Section initiated 693 matters and filed 31 cases as plaintiff to enforce federal civil rights statutes in four areas––institutional conditions (e.g., protecting persons in nursing homes), conduct of law enforcement agencies (e.g., police misconduct), access to reproductive health facilities and places of worship, and the exercise of religious freedom of institutionalized persons. Because the Section had discretion to pursue an investigation or case under all of the statutes it enforced, it considered all of its work to be self-initiated. Of the matters initiated and closed (544 of 693), most involved institutional conditions (373) and conduct of law enforcement agencies (129). Of the 31 cases that the Section filed as plaintiff, 27 alleged a pattern or practice of egregious and flagrant conditions that deprived persons institutionalized in health and social welfare (13), juvenile corrections (7), and adult corrections (7) facilities of their constitutional or federal statutory rights, and 3 cases involved the conduct of law enforcement agencies. 
According to Section officials, in deciding whether to pursue a case, they considered the conditions in a particular facility or misconduct of a particular police department and whether the system (e.g., state correctional or juvenile justice system) or department alleged to have violated the statute had taken corrective action or instead had accepted the behavior in question as its way of doing business. However, they said that even if the system or department were taking corrective action, the Section might pursue a case depending on the severity of the situation (e.g., sexual abuse) or if Section officials believed that the facility or local entity was incapable of addressing the problem. Additionally, according to Section officials, the Section sought to ensure its work reflected geographic diversity. Our analysis of the 31 plaintiff cases showed that the Section had filed cases in 21 states and the District of Columbia. During the 7-year period, the Section did not file any cases involving violations of the exercise of religious freedom of institutionalized persons under the Religious Land Use and Institutionalized Persons Act (RLUIPA). Section officials stated that there was a time when the Section's enforcement of RLUIPA was directed to be a lower priority than its enforcement of other statutes. However, in April 2009, these officials told us that the Section was reviewing a number of preliminary inquiries under RLUIPA, but had not yet filed any complaints because it was still investigating these matters. As previously discussed, information regarding the specific protected classes and subjects related to matters and cases and the reasons for closing matters were not systematically maintained in ICM because the Division did not require Sections to capture these data. As a result, the availability and accuracy of protected class and subject data—information that is key to ensuring that the Division executes its charge to enforce statutes prohibiting discrimination on the basis of protected class—varied among the sections. Additionally, neither we nor the Sections could systematically identify the Sections' reasons for closing matters, including the number of instances in which the Section recommended to proceed with a case and Division management did not approve the Section's recommendation. By collecting additional data on protected class and subject in ICM, the Division could strengthen its ability to account for the four sections' enforcement efforts. In October 2006, the Principal Deputy Assistant Attorney General issued a memorandum to section chiefs stating that Division leadership relies heavily on ICM data to, among other things, report to Congress and the public about its enforcement efforts, and should be able to independently extract the data from ICM needed for this purpose. However, over the years, congressional committees have consistently requested information for oversight purposes related to data that the Division does not require Sections to collect in ICM, including information on the specific protected classes and subjects related to matters and cases. While ICM includes fields for collecting these data, the Division has not required sections to capture these data. Some section officials said that they did not believe it was necessary to maintain this information in ICM for internal management purposes. As a result, we found that the availability and accuracy of these data varied among the sections. 
For example, when comparing data obtained from the 60 complaints the Employment Litigation Section filed in court with data maintained in ICM, we identified that the protected class and subject data in ICM were incomplete or inaccurate for 12 and 29 cases, or about 20 and 48 percent, respectively. Additionally, we found that the Section’s protected class and subject data were not captured in ICM for 2,808 and 2,855 matters, or about 83 and 85 percent, respectively. In contrast, according to the Housing and Civil Enforcement Section, it requires that protected class and subject data be recorded in ICM for all matters and cases, and we found that these data were consistently recorded in ICM. To help respond to information inquiries, all four sections maintain data in ancillary data systems, although some of the data are also recorded in ICM. For example, the Employment Litigation Section maintains broad information on protected class and uses this information in conjunction with data in ICM to report on its enforcement efforts. Section officials reported using ancillary data systems in part because it was easier to generate customized reports than using ICM. We previously reported that agencies with separate, disconnected data systems may be unable to aggregate data consistently across systems, and are more likely to devote time and resources to collecting and reporting information than those with integrated systems. Requiring sections to record these data in ICM would assist the Division in, among other things, responding to inquiries from Congress by ensuring access to readily available information and by reducing reliance on ancillary data systems. Additionally, congressional committees have requested information regarding reasons the Division did not pursue matters, including instances in which Division managers did not approve a section’s recommendation to proceed with a case. However, ICM does not include a discrete field for capturing the reasons that matters are closed and Division officials we interviewed could not identify instances in which Division managers did not approve a section’s recommendation to proceed with a case. Moreover, sections do not maintain this information in other section-level information systems. ICM does have a comment field that sections can use to identify the reasons matters are closed, although these data are not required or systematically maintained in ICM and the Division could not easily aggregate these data using the comment field. According to Division officials, when Division and section officials were determining which data were to be captured in ICM, they did not consider the need to include a discrete field to capture the reasons that matters were closed. As a result, we had to review Division matter files to determine the reasons that matters were closed, and in some instances this information was not contained in the files. For example, for 7 of the 19 section 706 closed matter files we reviewed for the Employment Litigation Section, the reason the matter was closed was not contained in the file documentation we received, and Section officials attributed this to a filing error. Moreover, Division officials stated that because the Division did not track the reasons for closing matters in ICM, they have had to review files and talk with section attorneys and managers to obtain this information. They said that it was difficult to compile this information because of turnover among key section officials. 
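To illustrate the distinction between a free-text comment field and a discrete field, the sketch below shows one hypothetical way a matter record could carry enumerated values for protected class, subject, and closure reason. The field names and categories are our own illustrative assumptions, not ICM's actual data model.

```python
# Illustrative sketch only; the field names and categories are hypothetical,
# not ICM's actual data model.
from collections import Counter
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class ClosureReason(Enum):
    FACTS_DID_NOT_JUSTIFY_PROSECUTION = "facts would not justify prosecution"
    PURSUED_THROUGH_PRIVATE_LITIGATION = "pursued through private litigation"
    EMPLOYER_PROVIDED_RELIEF = "employer provided or offered relief"
    OTHER = "other"

@dataclass
class MatterRecord:
    matter_id: str
    protected_class: str                     # e.g., "race", "sex", "religion"
    subject: str                             # e.g., "hiring", "harassment", "retaliation"
    closure_reason: Optional[ClosureReason]  # discrete field supports aggregation
    comment: str = ""                        # free text; hard to aggregate reliably

def closure_summary(matters: List[MatterRecord]) -> Counter:
    """Count closed matters by closure reason, a tally that a free-text
    comment field alone cannot support without manual file review."""
    return Counter(m.closure_reason for m in matters if m.closure_reason is not None)
```

With discrete values in place, reporting the number of matters closed for each reason becomes a simple aggregation rather than a file-by-file review.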
Capturing information on the reasons matters were closed in the Division’s case management system would facilitate the reporting of this information to Congress and enable the Division to conduct a systematic analysis of the reasons that matters were closed. This would also help the Division to determine whether there were issues that may need to be addressed through actions, such as additional guidance from the Division on factors it considers in deciding whether to approve a section’s recommendation to pursue a case. In our September 2009 report, we recommended that to strengthen the Division’s ability to manage and report on the four sections’ enforcement efforts, the Acting Assistant Attorney General of the Division, among other things, (1) require sections to record data on protected class and subject in the Division’s case management system in order to facilitate reporting of this information to Congress, and (2) as the Division considers options to address its case management system needs, determine how sections should be required to record data on the reasons for closing matters in the system in order to be able to systematically assess and take actions to address issues identified. DOJ concurred with our recommendations and, according to Division officials, the Division plans to (1) require sections divisionwide to record data on protected class and subject/issue in its case management system by the end of calendar year 2009 and (2) upgrade the system to include a field on reasons for closing matters and require sections divisionwide to record data in this field. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are Maria Strudwick, Assistant Director, David Alexander; R. Rochelle Burns; Lara Kaskie; Barbara Stolz; and Janet Temko. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Civil Rights Division (Division) of the Department of Justice (DOJ) is the primary federal entity charged with enforcing federal statutes prohibiting discrimination on the basis of race, sex, disability, religion, and national origin (i.e., protected classes). The Government Accountability Office (GAO) was asked to review the Division's enforcement efforts and its Interactive Case Management System (ICM). This testimony addresses (1) the activities the Division undertook from fiscal years 2001 through 2007 to implement its enforcement responsibilities through its Employment Litigation, Housing and Civil Enforcement, Voting, and Special Litigation sections, and (2) additional data that could be collected using ICM to assist in reporting on the four sections' enforcement efforts. This statement is based on GAO products issued in September and October 2009. From fiscal years 2001 through 2007, the Civil Rights Division initiated matters and filed cases to implement its enforcement responsibilities through the four sections. The Employment Litigation Section initiated 3,212 matters and filed 60 cases as plaintiff under federal statutes prohibiting employment discrimination. Most matters (3,087) were referred by other agencies. Of the 11 pattern or practice cases--cases that attempt to show that the defendant systematically engaged in discriminatory activities--9 involved claims of discrimination in hiring and the most common protected class was race (7). The Housing and Civil Enforcement Section initiated 947 matters and participated in 277 cases under federal statutes prohibiting discrimination in housing, credit transactions, and certain places of public accommodation. Most (456 of 517) Fair Housing Act (FHA) matters were initiated under its pattern or practice authority, primarily alleging discrimination on the basis of race or disability and involving land use/zoning/local government or rental issues. Most (250 of 269) cases filed as plaintiff included an FHA claim. The FHA cases primarily involved rental issues (146) and alleged discrimination on the basis of disability (115) or race (70). The Voting Section initiated 442 matters and filed 56 cases to enforce federal statutes that protect the voting rights of racial and language minorities, and disabled and illiterate persons, among others. The Section initiated most matters (367) and filed a majority of cases (39) as plaintiff under the Voting Rights Act, primarily on behalf of language minority groups (246 and 30). The Special Litigation Section initiated 693 matters and filed 31 cases as plaintiff to enforce federal civil rights statutes on institutional conditions (e.g., protecting people in nursing homes), the conduct of law enforcement agencies, access to reproductive health facilities and places of worship, and the exercise of religious freedom of institutionalized persons. The largest number of matters initiated and closed (544 of 693) involved institutional conditions (373), as did the cases filed (27). Information on the specific protected classes and subjects related to matters and cases and the reasons for closing matters were not systematically maintained in ICM because the Division did not require sections to capture these data. As a result, the availability and accuracy of these data varied among the sections. For example, the Employment Litigation Section did not capture protected class and subject data for more than 80 percent of its matters. 
In contrast, these data were consistently recorded in ICM for the Housing and Civil Enforcement Section, which requires that protected class and subject data be recorded in ICM. In addition, congressional committees have requested information on reasons the Division did not pursue matters, including instances in which Division managers did not approve a section's recommendation to proceed with a case. However, ICM does not include a discrete field for capturing the reasons that matters are closed and Division officials we interviewed could not identify instances in which Division managers did not approve a section's recommendation to proceed with a case. By requiring sections to record such information, the Division could strengthen its ability to account for its enforcement efforts.
The MIG has taken three different approaches since establishing the NMAP—test audits, Medicaid Statistical Information System (MSIS) audits, and collaborative audits. In each approach, contractors conducted postpayment audits, that is, they reviewed medical documentation and other information related to Medicaid claims that had already been paid. The key differences among the three NMAP approaches were (1) the data sources used to identify audit targets, and (2) the roles assigned to states and contractors. In June 2007, the MIG hired a contractor to conduct test audits, and it implemented MSIS audits beginning in December 2007 by hiring separate review and audit contractors for each of five geographic areas of the country. Collaborative audits were introduced in January 2010. In June 2007, the MIG hired a contractor to conduct test audits in five states. Working with the MIG and the states, the contractor audited 27 providers. States provided the initial audit targets based on their own analysis of Medicaid Management Information System (MMIS) data. MMIS are mechanized claims processing and information retrieval systems maintained by individual states, and generally reflect real-time payments and adjustments of detailed claims for each health care service provided. In some cases, states provided samples of their claims data with which to perform the audits, and in other cases states provided a universe of paid claims that the MIG's contractor analyzed to derive the sample. Twenty-seven test audits were conducted on hospitals, physicians, dentists, home health agencies, medical transport vendors, and durable medical equipment providers. In December 2007, while test audits were still under way, the MIG began hiring review and audit contractors to implement MSIS audits. MSIS audits differed from the test audits in three ways. First, MSIS audit targets were selected based on the analysis of Medicaid Statistical Information System (MSIS) data. MSIS is a national data set collected and maintained by CMS consisting of extracts from each state's MMIS, including eligibility files and paid claims files that were intended for health care research and evaluation activities but not necessarily for auditing. As a subset of states' more detailed MMIS data files, MSIS data do not include elements that can assist in audits, such as the explanations of benefit codes and the names of providers and beneficiaries. In addition, MSIS data are not as timely because of late state submissions and the time it takes CMS's contractor to review and validate the data. MIG officials told us that they chose MSIS data because the data were readily available for all states and the state-based MMIS data would require a significant amount of additional work to standardize across states. (See table 1 below.) Second, MSIS audits were conducted over a wider geographic area; 44 states have had MSIS audits, compared with the small number of states selected for test audits. Third, MSIS audits use two types of contractors—review contractors to conduct data analysis and help identify audit leads, and audit contractors to conduct the audits. In the test audits, the states provided the initial audit leads. Review contractors. The MIG's two review contractors analyze MSIS data to help identify potential audit targets in an analytic process known as data mining. The MIG issues monthly assignments to these contractors, and generally allows the contractors 60 days to complete them. 
For each assignment, the MIG specifies the state, type of Medicaid claims data, range of service dates, and algorithm (i.e., a specific set of logical rules or criteria used to analyze the data). The work of the review contractor is summarized in an algorithm findings report, which contains lists of providers ranked by the amount of their potential overpayment. The January through June 2010 algorithm reports reviewed by the HHS-OIG identified 113,378 unique providers from about 1 million claims. The MIG's Division of Fraud Research & Detection oversees the technical work of the review contractors. A summary of the review contractor activities for MSIS audits is shown in figure 1. Audit contractors. The MIG's audit contractors conduct postpayment audits of Medicaid providers. Audit leads are selected by the MIG's Division of Field Operations, generally by looking at providers across one or more applicable algorithms to determine if they have been overpaid or demonstrated egregious billing patterns. From the hiring of audit contractors in December 2007 through February 2012, the division assigned 1,550 MSIS audits to its contractors. During an audit, the contractor may request and review copies of provider records, interview providers and office personnel, or visit provider facilities. If an overpayment is identified, the contractor drafts an audit report, which is shared with the provider and the state. Ultimately, the state is responsible for collecting any overpayments in accordance with state law and must report this information to CMS. A summary of the audit contractor activities is shown in figure 2. In June 2011, CMS released its fiscal year 2010 report to Congress, which outlined a redesign of the NMAP with an approach that closely resembled the test audits. The report described the redesign as an effort to enhance the NMAP and assist states with their program integrity priorities. CMS refers to this new approach as collaborative audits. In these collaborative audits, the MIG and its contractor primarily used MMIS data and leveraged state resources and expertise to identify audit targets. In contrast, MSIS audits used separate review contractors and MSIS data to generate lists of providers with potential overpayments, and the MIG selected the specific providers to be audited. From June 2007 through February 2012, payments to the contractors for test, MSIS, and collaborative audits totaled $102 million. On an annual basis, these contractor payments account for more than 40 percent of all of the MIG's expenditures on Medicaid program integrity activities. Contractor payments rose from $1.3 million in fiscal year 2007 to $33.7 million in fiscal year 2011. (See fig. 3.) The total cost of the NMAP is likely greater than $102 million because that figure does not include expenditures on the salaries of MIG staff that support the operation of the program. The MSIS audits were less effective in identifying potential overpayments than test and collaborative audits. The main reason for the difference in audit results was the use of MSIS data. According to MIG officials, they chose MSIS data because the data were readily available for all states, they are collected and maintained by CMS, and are intended for health care research and evaluation activities. However, the MSIS audits were not well coordinated with states, and duplicated and diverted resources from states' program integrity activities. Compared with test and collaborative audits, the return on MSIS audits was significantly lower. 
As of February 2012, only 59 of the 1,550 MSIS audits (about 4 percent) had identified potential overpayments, totaling $7.4 million. In contrast, 26 test audits and 6 collaborative audits together identified $12.5 million in potential overpayments (see fig. 4). Appendix II provides details on the characteristics of MSIS audits that successfully identified overpayments. While the newer collaborative audits have not yet identified more in overpayments than MSIS audits, only 6 of the 112 collaborative audits have final audit reports (see app. III), suggesting that the total overpayment amounts identified through collaborative audits will continue to grow. In addition, the MSIS audits identified potential overpayments for much smaller amounts. Half of the MSIS audits were for potential overpayments of $16,000 or less, compared to a median of about $140,000 for test audits and $600,000 for collaborative audits. The use of MSIS data was the principal reason for the poor MSIS audit results, that is, the low amount of potential overpayments identified and the high proportion of unproductive audits. Over two-thirds (69 percent) of the 1,550 MSIS audits assigned to contractors through February 2012 were unproductive, that is, they were discontinued (625), had low or no findings (415), or were put on hold (37). (See fig. 5.) Our findings are consistent with an early assessment of the MIG's audit contractors, which cited MSIS data issues as the top reason that MSIS audits identified a lower amount of potential overpayments. State program integrity officials, the HHS-OIG, and its audit contractors told the MIG that MSIS data would result in many false leads because the data do not contain critical audit elements, including provider identifiers; procedure, product, and service descriptions; billing information; and beneficiary and eligibility information. For example, the MIG assigned 81 MSIS audits in one state because providers appeared to be billing more than 24 hours of service in a single day. However, all of these audits were later discontinued because the underlying data were incomplete and thus misleading; the audited providers were actually large practices with multiple personnel, whose total billing in a single day legitimately exceeded 24 hours. One state official told us that when states met with the MIG staff during the rollout of the Medicaid Integrity Program, the state officials emphasized that (1) MSIS data could not be used for data mining or auditing because they were 'stagnant,' i.e., MSIS does not capture any adjustments that are subsequently made to a claim and (2) MMIS data were current and states would be willing to share their MMIS data with CMS. In their annual lessons-learned reports, the audit and review contractors told the MIG that the MSIS data were not timely or accurate, and recommended that the MIG help them obtain access to state MMIS data. Nevertheless, the MIG continued to assign MSIS-based audits to its contractors; 78 percent of MSIS audits (1,208) were assigned after the August 2009 HHS-OIG report. MIG officials told us that they chose MSIS data because the data were readily available for all states, they are collected and maintained by CMS, and are intended for health care research and evaluation activities. However, when considering the use of MSIS data, officials said that they were aware that the MSIS data had limitations for auditing and could produce many false leads. 
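As a rough illustration of how such false leads can arise, the sketch below applies a simplified 24-hours-in-a-day screen to paid claims. The claim fields, threshold, and data layout are our own assumptions for illustration, not the review contractors' actual algorithms or the MSIS record format.

```python
# Simplified illustration; the claim fields and threshold are assumptions, not
# the review contractors' actual algorithms or the MSIS record format.
from collections import defaultdict

def flag_over_24_hours(claims):
    """Sum billed hours per billing provider per service date and return
    provider-days exceeding 24 hours, ranked by paid amount."""
    hours = defaultdict(float)
    dollars = defaultdict(float)
    for c in claims:
        key = (c["billing_provider_id"], c["service_date"])
        hours[key] += c["units_hours"]
        dollars[key] += c["paid_amount"]
    flagged = [(key, hours[key], dollars[key]) for key in hours if hours[key] > 24]
    return sorted(flagged, key=lambda item: item[2], reverse=True)

# A group practice billing under one identifier can legitimately exceed 24 hours
# per day across several clinicians, so this lead is false unless the data also
# identify the rendering practitioner.
claims = [
    {"billing_provider_id": "GRP-1", "service_date": "2010-03-01",
     "units_hours": 10.0, "paid_amount": 900.0},
    {"billing_provider_id": "GRP-1", "service_date": "2010-03-01",
     "units_hours": 9.0, "paid_amount": 700.0},
    {"billing_provider_id": "GRP-1", "service_date": "2010-03-01",
     "units_hours": 8.0, "paid_amount": 650.0},
]
print(flag_over_24_hours(claims))  # flags GRP-1 even if three clinicians worked that day
```

Because the claims identify only the billing group, a practice with several clinicians is flagged even though its total hours are legitimate, which is the kind of lead that collapsed once the missing detail was obtained from the states.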
MIG officials also told us that collecting states' MMIS data would have been burdensome for states and would have resulted in additional work for the review contractors because they would need to do a significant amount of work to standardize the MMIS data to address discrepancies between the states' data sets. However, officials in 13 of the 16 states we contacted volunteered that they were willing to provide the MIG with MMIS data if asked to do so. In addition, the review contractors have had to do some work to standardize the state files within the MSIS maintained by CMS. The MIG did not effectively coordinate MSIS audits with states and as a result, the MIG duplicated state program integrity activities. Officials from several states we interviewed noted that some of the algorithms used by the review contractor were identical to or less sophisticated than the algorithms they used to identify audit leads. An official in one state told us that even after informing the contractor that its work would be duplicative, the review contractor ran the algorithm anyway. Officials in another state told us that the MIG was unresponsive to state assertions that it had a unit dedicated to reviewing a specific category of claims and the MIG was still pursuing audits for this provider type. State officials also cited general coordination challenges, including difficulty communicating with contractors. MIG officials acknowledged that poor communications resulted in the pursuit of many false leads that had not been adequately vetted by the states. In addition, representatives of a review contractor we interviewed told us that states did not always respond to requests to validate overpayments in the algorithm samples provided and the MIG may not have been aware of the lack of a state response when making audit assignments. State officials we interviewed told us that the review contractors' lack of understanding of state policy also contributed to the identification of false leads, even though (1) the MIG required its contractors to become familiar with each state's Medicaid program, and (2) the MIG reviewed state policies as a quality assurance step prior to assigning leads to its audit contractors. Nonetheless, one state official described how the MIG and its review and audit contractors had mistakenly identified overpayments for federally qualified health centers because they assumed that centers should receive reduced payments for an established patient on subsequent visits. In fact, centers are paid on an encounter basis, which uses the same payment rate for the first and follow-up visits. Officials in seven of the states we spoke with described the resources involved in assisting the MIG and its contractors. For example, states told us that they had assigned staff to: (1) review the algorithms used by review contractors to generate potential audit leads; (2) review lists of audit leads created by the MIG; and (3) provide information and training on state-level policies to audit contractors. One state official described how it had clinical staff rerun algorithms using the state's data system to see if the audit targets chosen by the MIG had merit. When state staff found that the MIG was pursuing a false lead, the state had to provide the MIG and its contractors with detailed explanations of why the suspect claims complied with state policies. 
While the state officials we spoke with did not estimate the cost of their involvement in MSIS audits, officials in some states said that participation in MSIS audits diverted staff from their regular duties. MIG officials told us they were sensitive to state burden and had attempted to minimize it. MIG’s redesigned NMAP focuses on collaborative audits, which may enhance state Medicaid program integrity activities, and it also intends to continue using MSIS data in some audits. As part of its NMAP redesign, the MIG has assigned new activities to its review contractors, but it is too early to assess their benefit. CMS has not reported to Congress key details about the changes it is making to the NMAP, including the rationale for the redesign of the program, but it plans to discuss these changes in its upcoming 2012 strategic plan. As part of its redesign, the MIG launched collaborative audits with a small number of states in early 2010 to enhance the MIG’s program and assist states with their own program integrity priorities. The MIG did not have a single approach for collaborative audits. For example, one state told us that the MIG’s audit contractor suggested that together they discuss conducting a collaborative audit with the MIG while in another state a collaborative audit was initiated by the MIG, with the audit contractor’s role limited to assistance during the audit (rather than leading it). Generally, collaborative audits allow states to augment their own program integrity audit capacity by leveraging MIG’s and its contractors’ resources. For example, officials from six of the eight states we interviewed told us the services targeted for collaborative audits were those that the state did not have sufficient resources to effectively audit on its own. In some cases, the MIG’s contractor staff conducted additional audits. In others, contractors were used to assess the medical necessity of claims when the states’ programs needed additional clinical expertise to make a determination. Officials from most of the states we interviewed agreed that the investment in collaborative audits was worthwhile but some told us that collaborative audits created some additional work for states. For example, two state programs reported that their staff was involved in training the MIG’s contractor staff. In one of these states, state program staff dedicated a full week to train the MIG’s audit contractor so that the contractor’s work would be in accordance with state policies. Another state program official reported that staff had to review all audits and overpayment recovery work, leading to a “bottleneck” in the state’s own program integrity activities. Officials in one state suggested that the collaborative audits could be improved if the MIG formalized a process for communicating and resolving disagreements related to audit reports, and minimized the changing of contractors in order to reduce the burden on states. Most states were in favor of expanding the number of collaborative audits. According to the MIG, the agency plans to expand its use of collaborative audits to as many states as are willing to participate. In fact, officials indicated that they are discussing collaborative audits with an additional 12 states. MIG officials noted that they do not foresee the collaborative audits completely replacing audits based on MSIS data. 
According to MIG officials, NMAP audits using MSIS data might be appropriate in certain situations, including audits of state-owned and operated facilities and states that are not willing to collaborate, as part of the MIG's oversight responsibilities. The MIG recognizes that MSIS-based audits are hampered by deficiencies in the data, and noted that CMS has initiatives under way to address these deficiencies through the Medicaid and CHIP Business Information and Solutions Council (MACBIS). MACBIS is an internal CMS governance body responsible for data planning, ongoing projects, and information product development. According to MIG officials, MACBIS projects include efforts to reduce the time from state submission of MSIS data to the availability of these data; automation of program data; improvements in encounter data reporting; and automation, standardization, and other improvements in MSIS data submissions. One MACBIS project is known as Transformed MSIS (T-MSIS), which aims to add 1,000 additional variables to MSIS for monitoring program integrity and include more regular MMIS updates. MIG officials told us that CMS is currently engaged in a 10-state pilot to develop the data set for T-MSIS, and anticipates that T-MSIS will be operational in 2014. As part of its NMAP redesign, the MIG has assigned new activities to the review contractors. Because these activities are new, it is too early to assess their benefit. Although the review contractors were not involved in early collaborative audits, the MIG expects that they will be involved in future collaborative audits based on these new activities. In redesigning the NMAP, the MIG tasked its review contractors in November 2011 with using MSIS data to compare state expenditures for a specific service to the national average expenditure for that service to identify states with abnormally high expenditures. Once a state (or states) with high expenditures is identified, then discussions are held with the states about their knowledge of these aberrations and recovery activities related to the identified expenditures. According to MIG officials, such cross-state analyses were recently initiated and thus have not yet identified any potential audit targets. The review contractor also indicated that it would continue to explore other analytic approaches to identify causes of aberrant state expenditures. Additionally, as part of its redesign of the program's audits, the MIG instructed its review contractors in November 2011 to reexamine successful algorithms from previously issued final algorithm reports. According to the MIG, the purpose of this effort is to identify the factors that could better predict promising audit targets and thereby improve audit target selection in the future. Although some MSIS audits identified potential overpayments, the value of developing a process using MSIS data to improve audit target selection in the future is questionable. According to the MIG, MSIS audits are continuing but on a more limited scale and with closer collaboration between states and the MIG's contractors. In its 2010 annual report to Congress on the Medicaid Integrity Program, CMS announced that it was redesigning the NMAP in an effort to enhance MIG programs and assist states with their program integrity priorities, but it did not provide key details regarding the changes. For example, the report did not mention that the MSIS audits had a poor return on investment, the number of unproductive audits, or the reasons for the unproductive audits. 
Moreover, since issuing its 2010 annual report, CMS has assigned new tasks to its review contractors such as reexamining old final algorithm reports to improve provider target selection and new cross-state analyses using MSIS data. But CMS has not yet articulated for Congress how these activities complement the redesign or how such activities will be used to identify overpayments. The MIG is preparing a new strategic plan—its Comprehensive Medicaid Integrity Plan covering Fiscal Years 2013 through 2017—which it plans to submit to Congress in the summer of 2012. According to MIG officials, the new strategic plan will generally describe shortcomings in the NMAP’s original design and how the redesign will address those shortcomings. However, MIG officials told us that they do not plan to discuss the effectiveness of the use of funds for MSIS audits, or explain how the MIG will monitor and evaluate the redesign. In its fiscal year 2013 HHS budget justification for CMS, the department indicated that in the future CMS would not report separately on the NMAP return on investment. HHS explained that it had become apparent that the ability to identify overpayments is not, and should not be, limited to the activities of the Medicaid integrity contractors. Rather, HHS said it is considering new measures that better reflect the resources invested through the Medicaid Integrity Program. Federal internal control standards provide that effective program plans are to clearly define needs, tie activities to organizational objectives and goals, and include a framework for evaluation and monitoring. Based on these standards, the poor performance of the MSIS audits should have triggered an evaluation of the program, particularly given the DRA requirement for CMS to report annually to Congress on the effectiveness of the use of funds appropriated for the Medicaid Integrity Program. In approximately 5 years of implementation, the MIG has spent at least $102 million on contractors for an audit program that has identified less than $20 million in potential overpayments. Moreover, almost two-thirds of these potential overpayments were identified in a small number of test and collaborative audits that used different data and took a different approach to identifying audit targets than the MSIS audits, which comprised the vast majority of the program’s audits. The poor performance of the MSIS audits can largely be traced to the MIG’s decision to use MSIS data to generate audit leads, although evidence showed that (1) these data were inappropriate for auditing, and (2) alternative data sources were both available and effective in identifying potential overpayments. Ineffective coordination with states and a limited understanding of state Medicaid policies on the part of the MIG’s contractors also contributed to the poor results of the MSIS audits. Although the MIG recognizes that the MSIS audits have performed far below expectations, it has not quantified how expenditures to date have compared with identified recoveries. Currently, the MIG is experimenting with a promising approach in which the states identify appropriate targets, provide the more complete MMIS data, and actively participate in the audits. This collaborative audit approach has identified $4.4 million in potential overpayments and is largely supported by the states we spoke with, even though they may have to invest their own resources in these audits. 
However, the MIG has not articulated how its redesign will address flaws in the NMAP, and it also plans to continue using MSIS data, despite its past experience with these data, for cross-state analysis and for states that are not willing to participate in collaborative audits. At this time, the MIG is preparing a new comprehensive plan for Congress that outlines the components of the NMAP redesign. The details provided in such a plan will be critical to evaluating the effectiveness of the redesign and the agency's long-term plan to improve the data necessary to conduct successful audits. Transparent communications and a well-articulated strategy to monitor and continuously improve NMAP are essential components of any plan seeking to demonstrate that the MIG can effectively manage the program. To effectively redirect the NMAP toward more productive outcomes and to improve reporting under the DRA, the CMS Administrator should ensure that (1) the MIG's planned update of its comprehensive plan quantifies the NMAP's expenditures and audit outcomes, addresses any program improvements, and outlines plans for effectively monitoring the NMAP, including how to validate and use any lessons learned or feedback from the states to continuously improve the audits; (2) future annual reports to Congress clearly address the strengths and weaknesses of the audit program and its effectiveness; and (3) the MIG's use of NMAP contractors supports and expands states' own program integrity audits, engages additional states that are willing to participate in collaborative audits, and explicitly considers state burden when conducting audit activities. We provided a draft of this report to HHS for comment. In its written comments, HHS stated that we had not appropriately recognized the progress CMS has made in evaluating and improving the Medicaid Integrity Program, which included the agency's redesign of NMAP. Collaborative audits were the core of that redesign. HHS described CMS's redesign approach as a phased one in which not all elements had been finalized when the agency announced the redesign in its June 2011 annual report to Congress (covering fiscal year 2010). HHS also commented that we did not fully describe the reasons for CMS's use of MSIS data. HHS partially concurred with our first recommendation and fully concurred with the other two recommendations. HHS's comments are reproduced in appendix IV. Although we characterized collaborative audits as a promising new approach, HHS commented that we (1) did not acknowledge that CMS had presented its rationale for the NMAP redesign in the agency's June 2011 annual report to the Congress, and (2) inappropriately criticized CMS for not including other redesign details in its report, which HHS said had not yet been finalized. We continue to believe that a full articulation of the redesign should include transparent reporting of the results of the MSIS audits. However, we agree that the June 2011 report, which was released 18 months after the initiation of collaborative audits, described their advantages—use of better data, augmenting state resources, and providing analytic support for states lacking that capability. Regarding the use of MSIS data, HHS stated that we do not fully describe CMS's reason for its use or acknowledge that CMS sought alternative data sources to supplement or replace MSIS data. 
We disagree because our report provides CMS's reasons for using MSIS data, acknowledges CMS's awareness of the MSIS data limitations, and discusses its Transformed MSIS project to improve the quality of MSIS data. In addition, we pointed out that officials in 13 of the 16 states we contacted volunteered that they were willing to provide CMS with their own more complete and timely MMIS data. We agree with HHS's comment that not all of CMS's plans for the redesign may have been complete at the time the June 2011 annual report to Congress was being finalized and therefore could not have been included in that report. We have revised this report to acknowledge that some of the elements of the redesign may not have been initiated until after the June 2011 report was finalized. HHS agreed with two of three elements related to our first recommendation regarding CMS's planned update of its Comprehensive Medicaid Integrity Plan covering fiscal years 2013 to 2017. HHS agreed that the planned update should (1) address any NMAP improvements proposed by CMS, and (2) outline CMS's plans for effectively monitoring the NMAP. HHS commented that CMS considers transparency of the program's performance to be a top priority. However, HHS did not concur that the update should quantify NMAP's expenditures and audit outcomes; CMS considers such information to be more appropriately presented in the annual reports to Congress, which already include dollar figures on annual expenditures for NMAP and overpayments identified in each fiscal year. CMS's annual reports to Congress have provided a snapshot of results that did not differentiate between the effectiveness of the various audit approaches used. For example, in its annual report covering fiscal year 2010, CMS reported that 947 audits were underway in 45 states and that its contractors had identified cumulative potential overpayments of about $10.7 million. Based on our analysis of CMS's data, MSIS audits had only identified overpayments of $2.4 million as of September 30, 2010. The remaining $8.4 million in overpayments were identified during the test audit phase, in which states identified the audit targets and supplied their own MMIS data. We continue to believe that CMS should more fully report on NMAP expenditures and audit outcomes in its annual reports and provide an overall assessment of NMAP in its next comprehensive plan. HHS concurred with our recommendation that CMS should clearly address the NMAP's strengths, weaknesses, and effectiveness in the agency's annual reports to Congress. HHS noted that in CMS's December 7, 2011, congressional testimony the agency had reported its awareness of the limitations of MSIS data and outlined steps to improve contractors' access to better quality Medicaid data. HHS also concurred with our recommendation that CMS's use of NMAP contractors should support and expand states' own audit activities, engage other willing states, and explicitly consider state burden when conducting collaborative audits. HHS reported that since February 2012 CMS had increased the number of collaborative audits by 25—from 112 audits in 11 states to 137 in 15 states. Based on HHS comments, we made technical changes as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributions to this report are listed in appendix V. The 59 MSIS audits that successfully identified potential overpayments were conducted in 16 states, and most of these audits involved hospitals (30 providers) and pharmacies (17 providers). These provider types also had the highest potential overpayments—over $6 million for hospitals and $600,000 for pharmacies. Arkansas and Florida accounted for over half of the audits that identified potential overpayments, but the most substantial overpayments were in Delaware ($4.6 million) and the District of Columbia ($1.7 million). (See tables 3 and 4.) Carolyn L. Yocom at (202) 512-7114 or yocomc@gao.gov. In addition to the contact named above, key contributors to this report were: Walter Ochinko, Assistant Director; Sean DeBlieck; Leslie V. Gordon; Drew Long; and Jasleen Modi. National Medicaid Audit Program: CMS Should Improve Reporting and Focus on Audit Collaboration with States. GAO-12-814T. Washington, D.C.: June 14, 2012. Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012. Medicaid: Federal Oversight of Payments and Program Integrity Needs Improvement. GAO-12-674T. Washington, D.C.: April 25, 2012. Medicaid Program Integrity: Expanded Federal Role Presents Challenges to and Opportunities for Assisting States. GAO-12-288T. Washington, D.C.: December 7, 2011. Fraud Detection Systems: Additional Actions Needed to Support Program Integrity Efforts at Centers for Medicare and Medicaid Services. GAO-11-822T. Washington, D.C.: July 12, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011. Status of Fiscal Year 2010 Federal Improper Payments Reporting. GAO-11-443R. Washington, D.C.: March 25, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Medicare: Program Remains at High Risk Because of Continuing Management Challenges. GAO-11-430T. Washington, D.C.: March 2, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-1004T. 
Washington, D.C.: September 30, 2009. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-957. Washington, D.C.: September 9, 2009. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-239T. Washington, D.C.: November 14, 2007. Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-17. Washington, D.C.: November 14, 2007. Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006. Medicaid Integrity: Implementation of New Program Provides Opportunities for Federal Leadership to Combat Fraud, Waste, and Abuse. GAO-06-578T. Washington, D.C.: March 28, 2006.
Medicaid, the joint federal-state health care financing program for certain low-income individuals, has the second-highest estimated improper payments of any federal program. The Deficit Reduction Act of 2005 expanded the federal role in Medicaid program integrity, and the Centers for Medicare & Medicaid Services (CMS), the federal agency that oversees Medicaid, established the Medicaid Integrity Group (MIG), which designed the National Medicaid Audit Program (NMAP). Since the NMAP's inception, the MIG has used three different audit approaches: test, Medicaid Statistical Information System (MSIS), and collaborative. This report focuses on (1) the effectiveness of the MIG's implementation of NMAP, and (2) the MIG's efforts to redesign the NMAP. To do this work, GAO analyzed MIG data, reviewed its contractors' reports, and interviewed MIG officials, contractor representatives, and state program integrity officials. Compared to the initial test audits and the more recent collaborative audits, the majority of the MIG's audits conducted under the NMAP were less effective because they used MSIS data. MSIS is an extract of states' claims data and is missing key elements, such as provider names, that are necessary for auditing. Since fiscal year 2008, 4 percent of the 1,550 MSIS audits identified $7.4 million in potential overpayments, 69 percent did not identify overpayments, and the remaining 27 percent were ongoing. In contrast, 26 test audits and 6 collaborative audits—which used states' more robust Medicaid Management Information System (MMIS) claims data and allowed states to select the audit targets—together identified more than $12 million in potential overpayments. Furthermore, the median amount of the potential overpayment for MSIS audits was relatively small compared to test and collaborative audits. The MIG reported that it is redesigning the NMAP, but has not provided Congress with key details about the changes it is making to the program, including the rationale for the change to collaborative audits, new analytical roles for its contractors, and its plans for addressing problems with the MSIS audits. Early results showed that this collaborative approach may enhance state program integrity activities by allowing states to leverage the MIG's resources to augment their own program integrity capacity. However, the lack of a published plan detailing how the MIG will monitor and evaluate NMAP raises concerns about the MIG's ability to effectively manage the program. Given that NMAP has accounted for more than 40 percent of MIG expenditures, transparent communications and a strategy to monitor and continuously improve NMAP are essential components of any plan seeking to demonstrate the MIG's effective stewardship of the resources provided by Congress. GAO recommends that the CMS Administrator ensure that the MIG's (1) update of its comprehensive plan provide key details about the NMAP, including its expenditures and audit outcomes, program improvements, and plans for effectively monitoring the program; (2) future annual reports to Congress clearly address the strengths and weaknesses of the audit program and its effectiveness; and (3) use of NMAP contractors support and expand states' own program integrity efforts through collaborative audits. HHS partially concurred with GAO's first recommendation, commenting that CMS's annual report to Congress was a more appropriate vehicle for reporting NMAP results than its comprehensive plan. HHS concurred with the other two recommendations.
Although our high-risk designation covers only DOD's program, our reports have also documented clearance-related problems affecting other agencies. For example, our October 2007 report on state and local information fusion centers cited two clearance-related challenges: (1) the length of time needed for state and local officials to receive clearances from the Federal Bureau of Investigation (FBI) and the Department of Homeland Security (DHS) and (2) the reluctance of some federal agencies—particularly DHS and FBI—to accept clearances issued by other agencies (i.e., clearance reciprocity). Similarly, our April 2007 testimony on maritime security and selected aspects of the Security and Accountability for Every Port Act (SAFE Port Act) identified the challenge of obtaining clearances so that port security stakeholders could share information through area committees or interagency operational centers. The SAFE Port Act includes a specific provision requiring the Secretary of Homeland Security to sponsor and expedite individuals participating in interagency operational centers in gaining or maintaining their security clearances. Our reports have offered findings and recommendations regarding current impediments, and they identify key factors to consider in future reforms. For example, as the interagency security clearance process reform team develops a new governmentwide end-to-end clearance system, this reform effort provides an opportune time to consider factors for evaluating intermediate steps and the final system in order to optimize efficiency and effectiveness. The Director of National Intelligence's July 25, 2007, memorandum provided the terms of reference for the security clearance process reform team and noted that a future Phase IV would be used to perform and evaluate demonstrations and to finalize the acquisition strategy. In designing a new personnel security clearance system, the Government Performance and Results Act of 1993 (GPRA) may be a useful resource for the team designing the system and the congressional committees overseeing the design and implementation. GPRA provides a framework for strategic performance planning and reporting intended to improve federal program effectiveness and hold agencies accountable for achieving results. Agencies that effectively implement GPRA's results-oriented framework clearly establish performance goals for which they will be held accountable, measure progress towards those goals, determine strategies and resources to effectively accomplish the goals, use performance information to make the programmatic decisions necessary to improve performance, and formally communicate results in performance reports. Our reports have also identified a number of directly relevant factors, such as those found in our November 2005 testimony that evaluated an earlier governmentwide plan for improving the personnel security clearance process. In my testimony, I will address four key factors: (1) a strong requirements-determination process, (2) quality emphasis in all clearance processes, (3) additional metrics to provide a fuller picture of clearance processes, and (4) long-term funding requirements of security clearance reform. The interagency security clearance process reform team established in July 2007 might want to address whether the numbers and levels of clearances are appropriate, since this initial stage in the clearance process can affect workloads and costs in other clearance processes.
For instance, the team may want to examine existing policies and practices to see if they need to be updated or otherwise modified. We are not suggesting that the numbers and levels of clearances are or are not appropriate—only that any unnecessary requirements in this initial phase use government resources that could be used for other purposes, such as building additional quality into other clearance processes or decreasing delays in clearance processing. Figure 1 shows that the clearance process begins with establishing whether an incumbent's position requires a clearance, and if so, at what level. The numbers of requests for initial and renewal clearances and the levels of such clearance requests (phase 2 in fig. 1) are two ways to look at outcomes of requirements setting in the clearance process. In our prior work, DOD personnel, investigations contractors, and industry officials told us that the large number of requests for investigations could be attributed to many factors. For example, they ascribed the large number of requests to the heightened security concerns that resulted from the September 11, 2001, terrorist attacks. They also attributed the large number of investigations to an increase in the operations and deployments of military personnel and to the increasingly sensitive technology that military personnel, government employees, and contractors come in contact with as part of their jobs. While having a large number of cleared personnel can give the military services, agencies, and industry a great deal of flexibility when assigning personnel, the investigative and adjudicative workloads required to provide that flexibility further tax a clearance process that already experiences delays in determining clearance eligibility. A change in the level of clearances being requested also affects the investigative and adjudicative workloads. For example, in our February 2004 report on impediments to eliminating clearance backlogs, we found that a growing percentage of all DOD requests for clearances for industry personnel was at the top secret level: 17 percent of those requests were at the top secret level in 1995 but 27 percent were at the top secret level in 2003. This increase of 10 percentage points in the proportion of investigations at the top secret level is important because top secret clearances must be renewed twice as often as secret clearances (i.e., every 5 years versus every 10 years). In August 2006, OPM estimated that approximately 60 total staff hours are needed for each investigation for an initial top secret clearance and 6 total staff hours are needed for the investigation to support a secret or confidential clearance. The doubling of the frequency, along with the increased effort to investigate and adjudicate each top secret reinvestigation, adds costs and workload for the government. Cost. For fiscal year 2008, OPM's standard billing rate is $3,711 for an investigation for an initial top secret clearance, $2,509 for an investigation to renew a top secret clearance, and $202 for an investigation for a secret clearance. The cost of getting and maintaining a top secret clearance for 10 years is approximately 30 times greater than the cost of getting and maintaining a secret clearance for the same period.
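The 30-to-1 cost ratio stated here, the 10-year example in the next paragraph, and the workload multipliers that follow can all be reproduced from the billing rates and staff-hour estimates cited in this testimony. The short sketch below (in Python) is purely illustrative arithmetic added for clarity; it is not part of GAO's analysis, and it uses only figures given in the text.

    # Illustrative arithmetic using only the figures cited in this testimony.
    initial_top_secret = 3711   # FY2008 billing rate, initial top secret investigation
    renew_top_secret = 2509     # FY2008 billing rate, top secret reinvestigation
    secret = 202                # FY2008 billing rate, secret clearance investigation

    # 10-year cost: a top secret clearance is renewed after 5 years; a secret lasts 10 years.
    ten_year_top_secret = initial_top_secret + renew_top_secret   # 6,220
    ten_year_secret = secret                                      # 202
    print(ten_year_top_secret / ten_year_secret)   # about 30.8, i.e., roughly 30 times greater

    # Investigative workload: about 60 staff hours per top secret investigation versus
    # about 6 for a secret, and top secret clearances are renewed twice as often.
    print((60 / 6) * 2)   # 20-fold increase in investigative workload

    # Adjudicative workload: reviews take about twice as long and occur twice as often.
    print(2 * 2)          # 4-fold increase in adjudicative workload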
For example, an individual getting a top secret clearance for the first time and keeping the clearance for 10 years would cost the government a total of $6,220 in current year dollars ($3,711 for the initial investigation and $2,509 for the reinvestigation after the first 5 years). In contrast, an individual receiving a secret clearance and maintaining it for 10 years would result in a total cost to the government of $202 ($202 for the initial clearance that is good for 10 years). Time/Workload. The workload is also affected by the scope of coverage in the various types of investigations. Much of the information for a secret clearance is gathered through electronic files. The investigation for a top secret clearance, on the other hand, requires the information needed for the secret clearance as well as data gathered through time-consuming tasks such as interviews with the subject of the investigation request, references in the workplace, and neighbors. Since (1) the average investigative report for a top secret clearance takes about 10 times as many investigative staff hours as the average investigative report for a secret clearance and (2) the top secret clearance must be renewed twice as often as the secret, the investigative workload increases about 20-fold. Additionally, the adjudicative workload increases about 4-fold. In 2007, DOD officials estimated that it took about twice as long to review an investigative report for a top secret clearance, which must also be done twice as often as for a secret clearance. Unless the new system developed by the interagency security clearance process reform team includes a sound requirements process, workload and costs may be higher than necessary. Since the late 1990s, GAO has emphasized a need to build more quality and quality monitoring into clearance processes to achieve positive goals such as promoting greater reciprocity and maximizing the likelihood that individuals who are security risks will be scrutinized more closely. In our November 2005 testimony on the earlier governmentwide plan to improve the clearance process, we noted that the plan devoted little attention to monitoring and improving the quality of the personnel security clearance process, and that limited attention and reporting about quality continue. When OMB issued its February 2007 Report of the Security Clearance Oversight Group Consistent with Title III of the Intelligence Reform and Terrorism Prevention Act of 2004, it documented quality with a single metric. Specifically, it stated that OPM has developed additional internal quality control processes to ensure that the quality of completed investigations continues to meet the national investigative standards. OMB added that, overall, less than 1 percent of all completed investigations are returned to OPM from the adjudicating agencies for quality deficiencies. When OMB issued its February 2008 Report of the Security Clearance Oversight Group, it did not discuss the percentage of completed investigations that are returned to OPM or the development or existence of any other metric measuring the level of quality in security clearance processes or products. As part of our September 2006 report, we examined a different aspect of quality—the completeness of documentation in investigative and adjudicative reports. We found that OPM provided incomplete investigative reports to DOD adjudicators, which the adjudicators then used to determine top secret clearance eligibility.
Almost all (47 of 50) of the sampled investigative reports we reviewed were incomplete based on requirements in the federal investigative standards. In addition, DOD adjudicators granted clearance eligibility without requesting additional information for any of the incomplete investigative reports and did not document that they considered some adjudicative guidelines when adverse information was present in some reports. GAO has long reported that it is problematic to equate the quality of investigations with the percentage of investigations that are returned by requesting agencies due to incomplete case files. For example, in October 1999 and again in our November 2005 evaluation of the governmentwide plan, we stated that the number of investigations returned for rework is not by itself a valid indicator of quality because adjudication officials said they were reluctant to return incomplete investigations in anticipation of further delays. We additionally suggested that, regardless of whether this metric continues to be used, the government might want to consider adding other indicators of the quality of investigations, such as the number of counterintelligence leads generated from security clearance investigations and forwarded to relevant units. Further, our September 2006 report recommended that OMB's Deputy Director for Management require OPM and DOD to (1) submit their procedures for eliminating the deficiencies that we identified in their investigative and adjudicative documentation and (2) develop and report metrics on completeness and other measures of quality that will address the effectiveness of the new procedures. We believe that our recommendation still has merit, but the previously cited passage from the February 2007 OMB report does not describe the new procedures or provide statistics for the recommended new quality measures, and the 2008 OMB report is silent on quality measures. As we noted in September 2006, the government cannot afford to achieve its timeliness goal by providing investigative and adjudicative reports that are incomplete in key areas required by federal investigative standards and adjudicative guidelines. Incomplete investigations and adjudications undermine the government's efforts to move toward greater clearance reciprocity. An interagency working group, the Security Clearance Oversight Steering Committee, noted that agencies are reluctant to be accountable for poor-quality investigations and/or adjudications conducted by other agencies or organizations. To achieve fuller reciprocity, clearance-granting agencies need to have confidence in the quality of the clearance process. Without full documentation of investigative actions, information obtained, and adjudicative decisions, agencies could continue to require duplicative investigations and adjudications. Earlier, we stated that reciprocity concerns continue to exist, citing FBI and DHS reluctance to accept clearances issued by other agencies when providing information to personnel in fusion centers. Much of the recent quantitative information provided on clearances has dealt with how much time it takes for the end-to-end processing of clearances (and related measures such as the numbers of various types of investigative and adjudicative reports generated); however, there is less quantitative information on other aspects of the clearance process.
In our November 2005 testimony, we noted that the earlier government plan to improve the clearance process provided many metrics to monitor the timeliness of clearances governmentwide, but that plan detailed few of the other elements that a comprehensive strategic plan might contain. A similar emphasis on timeliness appears to be emerging for the future governmentwide clearance process. In the Director of National Intelligence's 500 Day Plan for Integration and Collaboration issued on October 10, 2007, the core initiative to modernize the security clearance process had only one type of metric listed under the heading about how success will be gauged. Specifically, the plan calls for measuring whether "performance of IC agency personnel security programs meet or exceed IRTPA guidelines for clearance case processing times." While the February 2007 and 2008 OMB reports to Congress contain statistics and other information in addition to timeliness metrics (e.g., use of information technology and reciprocity-related procedures) and the joint team developing the new clearance process may be considering a wider range of metrics than timeliness only, an underlying factor in the emphasis on timeliness is IRTPA. Among other things, IRTPA established specific timeliness guidelines to be phased in over 5 years. The Act also states that, in the initial period, which ends in 2009, each authorized adjudicative agency shall make a determination on at least 80 percent of all applications for personnel security clearance within an average of 120 days after the receipt of the application for a security clearance by an authorized investigative agency. The 120-day average period shall include a period of not longer than 90 days to complete the investigative phase of the clearance review and a period of not longer than 30 days to complete the adjudicative phase of the clearance review. Moreover, IRTPA also includes a requirement for a designated agency (currently OMB) to provide information on, among other things, timeliness in annual reports through 2011, as OMB did in February 2008. Prior GAO reports as well as inspector general reports identify a wide variety of methods and metrics that program evaluators have used to examine clearance processes and programs. For example, our 1999 report on security clearance investigations used multiple methods to examine numerous issues that included documentation missing from investigative reports; the training of investigators (courses, course content, and number of trainees); investigators' perceptions about the process; customer perceptions about the investigations; and internal controls to protect against fraud, waste, abuse, and mismanagement. Including these and other types of metrics in regular monitoring of clearance processes could add value in current and future reform efforts as well as supply better information for greater congressional oversight. The joint Security Clearance Process Reform team may also want to consider providing Congress with the long-term funding requirements to implement changes to security clearance processes, enabling more informed congressional oversight. In a recent report to Congress, DOD provided funding requirements information that described its immediate needs for its industry personnel security program, but it did not include information about the program's long-term funding needs.
Specifically, DOD's August 2007 congressionally mandated report on clearances for industry personnel provided less than 2 years of data on funding requirements. In its report, DOD identified its immediate needs by submitting an annualized projected cost of $178.2 million for fiscal year 2007 and a projected funding need of approximately $300 million for fiscal year 2008. However, the report did not include information on (1) the funding requirements for fiscal year 2009 and beyond even though the survey used to develop the funding requirements asked contractors about their clearance needs through 2010 and (2) the tens of millions of dollars that the Defense Security Service Director testified before Congress in May 2007 were necessary to maintain the infrastructure supporting the industry personnel security clearance program. As noted in our February 2008 report, the inclusion of less than 2 future years of budgeting information in the DOD report limits Congress's ability to carry out its oversight and appropriations functions pertaining to industry personnel security clearances. Without more information on DOD's longer-term funding requirements for industry personnel security clearances, Congress lacks the visibility it needs to fully assess appropriations requirements. In addition, the long-term funding requirements to implement changes to security clearance processes are also needed to enable the executive branch to compare and prioritize alternative proposals for reforming the clearance processes. As the joint Security Clearance Process Reform team considers changes to the current clearance processes, it may also want to consider ensuring that Congress is provided with the long-term funding requirements necessary to implement any such reforms. We were encouraged when OMB undertook the development of an earlier governmentwide plan for improving the personnel security clearance process and have documented in our prior reports both DOD and governmentwide progress in addressing clearance-related problems. Similarly, the current joint effort to develop a new governmentwide end-to-end security clearance system represents a positive step to address past impediments and manage security clearance reform efforts. Still, much remains to be done before a new system can be designed and implemented. GAO's experience in evaluating DOD's and governmentwide clearance plans and programs as well as its experience monitoring large-scale, complex acquisition programs could help Congress in its oversight, insight, and foresight regarding security clearance reform efforts. Madam Chairwoman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Jack E. Edwards, Acting Director; James P. Klein, Joanne Landesman, Charles Perdue, Karen D. Thornton, and Stephen K. Woods.
DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008.
Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007.
Defense Business Transformation: A Full-time Chief Management Officer with a Term Appointment Is Needed at DOD to Maintain Continuity of Effort and Achieve Sustainable Success. GAO-08-132T. Washington, D.C.: October 16, 2007.
DOD Personnel Clearances: Delays and Inadequate Documentation Found For Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007.
Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007.
High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
DOD Personnel Clearances: Additional OMB Actions Are Needed To Improve The Security Clearance Process. GAO-06-1070. Washington, D.C.: September 2006.
Managing Sensitive Information: DOD Can More Effectively Reduce the Risk of Classification Errors. GAO-06-706. Washington, D.C.: June 30, 2006.
DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006.
DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006.
DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006.
Questions for the Record Related to DOD's Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006.
DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005.
Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005.
Questions for the Record Related to DOD's Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005.
Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005.
DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO's High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005.
DOD's High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission's Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004.
DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632. Washington, D.C.: May 26, 2004.
DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004.
Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004.
DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004.
This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2004, Congress passed the Intelligence Reform and Terrorism Prevention Act to reform security clearance processes. Much of GAO's experience in evaluating personnel security clearance processes over the decades has consisted of examining the Department of Defense's (DOD) program, which maintains about 2.5 million clearances for servicemembers, DOD civilian employees, legislative branch employees, and industry personnel working for DOD and 23 other federal agencies. Long-standing delays in processing applications—and other problems in DOD's clearance program—led GAO to designate it a high-risk area in 2005. GAO also has documented clearance-related problems in other agencies. For this hearing, GAO was asked to identify key factors that could be applied in personnel security clearance reform efforts. To identify key factors, GAO drew upon its past reports and institutional knowledge. For those reports, GAO reviewed laws, executive orders, policies, reports, and other documentation related to the security clearance process; examined samples of cases of personnel granted top secret eligibility; compared documentation in those sampled cases against federal standards; and interviewed a range of cognizant government officials. Current and future efforts to reform personnel security clearance processes should consider, among other things, the following four key factors: determining whether clearances are required for positions, incorporating quality control steps throughout the clearance processes, establishing metrics for assessing all aspects of clearance processes, and providing Congress with the long-term funding requirements of security clearance reform. Requesting a clearance for a position in which it will not be needed, or in which a lower-level clearance would be sufficient, will increase both costs and investigative workload unnecessarily. For example, changing the clearance needed for a position from secret to top secret increases the investigative workload for that position about 20-fold and uses 10 times as many investigative staff hours. Emphasis on quality in clearance processes could promote positive outcomes, including more reciprocity among agencies in accepting each other's clearances. Building quality throughout clearance processes is important, but government agencies have paid little attention to quality, despite GAO's repeated suggestions to place more emphasis on quality. Even though GAO identified the government's primary metric for assessing quality—the percentage of investigative reports returned for insufficiency during the adjudicative phase—as inadequate by itself in 1999, the Office of Management and Budget and the Office of Personnel Management continue to use that metric. Concerns about the quality of investigative and adjudicative work underlie the continued reluctance of agencies to accept clearances issued by other agencies; as a result, government resources are used to conduct duplicative investigations and adjudications. Many efforts to monitor clearance processes emphasize measuring timeliness, but additional metrics could provide a fuller picture of clearance processes. The emphasis on timeliness is due in part to recent legislation that provides specific guidelines regarding the speed with which clearances should be completed and requires annual reporting of that information to Congress.
GAO has highlighted a variety of metrics in its reports (e.g., completeness of investigative and adjudicative reports, staff's and customers' perceptions of the processes, and the adequacy of internal controls), all of which could add value in monitoring clearance processes and provide better information to allow improved oversight by Congress and the Executive Branch. Another factor to consider in reform efforts is providing Congress with the long-term funding requirements to implement changes to security clearance processes. DOD's August 2007 congressionally mandated report on industry clearances identified its immediate funding needs but did not include information on the funding requirements for fiscal year 2009 and beyond. The inclusion of less than 2 future years of budgeting data in the DOD report limits Congress's ability to carry out its long-term oversight and appropriations functions pertaining to industry personnel security clearances.
IHS, an operating division of HHS, is responsible for providing health services to members of federally recognized tribes of American Indians and Alaska natives. In 2007, IHS provided health services to approximately 1.9 million American Indians and Alaska natives from more than 562 federally recognized tribes. As an operating division of HHS, IHS is included in the agency's consolidated financial statement and has not been audited independently since 2002. IHS is divided into 12 regions and operates 163 service units throughout the country. Service units may contain one or more health facilities, including hospitals, health centers, village clinics, health stations, and school health centers. There are 114 IHS-operated health facilities and 565 tribally operated health facilities. The IHS budget appropriation in 2007 was $3.2 billion, approximately 54 percent of which was administered by tribes through various contracts and compacts with the federal government. We substantiated the allegation of gross mismanagement of property at IHS. Specifically, we found that thousands of computers and other property, worth millions of dollars, have been lost or stolen. We analyzed IHS reports for headquarters and the 12 regions from the last 4 fiscal years, which identified over 5,000 property items, worth about $15.8 million, that were lost or stolen from IHS headquarters and field offices throughout the country. The number and dollar value of this missing property are likely much higher because IHS did not conduct full inventories of accountable property for all of its locations and did not provide us with all inventory documents as requested. Despite IHS's attempts to obstruct our investigation, our full physical inventory at headquarters and our random sample of property at seven field locations identified millions of dollars of missing property. We also found that IHS has made wasteful purchases over the past few years. For example, IHS has bought computer equipment that remains unused in its original boxes and has issued IT equipment to its employees that duplicates equipment already provided to them. Our analysis of Report of Survey records from IHS headquarters and field offices shows that from fiscal year 2004 through fiscal year 2007, IHS property managers identified over 5,000 lost or stolen property items worth about $15.8 million. Although we did receive some documentation from IHS, the number and dollar value of items that have been lost or stolen since 2004 are likely much higher for the following reasons. First, IHS does not consistently document lost or stolen property items. For example, 9 of the 12 IHS regional offices did not even perform a physical inventory in fiscal year 2007. Second, for each year since fiscal year 2004, an average of 5 of the 12 regions did not provide us with all of the reports used to document missing property, as we requested. The following cases describe five egregious examples of lost and stolen property we identified. In each case, IHS has not held any staff accountable for the missing items. In some of the cases, IHS did not even perform an investigation to try to locate the missing items or determine what actions should be taken. IHS staff held a "yard sale" of 17 computers and other property worth $16,660 in Schurz, Nevada, between June and July 2005. According to an IHS property manager, the equipment was advertised to the public via fliers indicating that excess federal property was to be given away for free.
To date, IHS has not completed the investigation or held any IHS personnel responsible and, according to a 2006 report, intends to write off the missing equipment. According to the Phoenix area property manager, the 17 computers identified as missing were transferred from a youth patient center and could contain sensitive youth patient information because the computers were never "cleaned" before being transferred to the Schurz service unit. We are referring this potential release of patient data to the HHS OIG for further investigation. From 1999 through 2005, IHS did not follow required procedures to document the transfer of property from IHS to the Alaska Native Tribal Health Consortium, resulting in an unsuccessful 5-year attempt by IHS to reconcile the inventory. Our analysis of IHS documentation revealed that about $6 million of this property—including all-terrain vehicles, generators, van trailers, tractors, and other heavy equipment—was lost or stolen. In April 2007, a desktop computer containing a database of uranium miner names, Social Security numbers, and medical histories was stolen from an IHS hospital in New Mexico. According to an HHS report, IHS attempted to notify the 849 miners whose personal information was compromised, but IHS did not issue a press release to inform the public of the compromised data. In addition to this incident, the IHS Finance department reported a missing Personal Digital Assistant (PDA) in March 2008, when it requested a replacement. The PDA contained medical information and names of patients at a Tucson Area Hospital. According to an IHS IT official, the device had no password protection or data encryption. This violated federal policy and increased the risk that sensitive information could be disclosed to unauthorized individuals. Both of these cases have already been reported to HHS by the IHS Office of Information Technology. In September 2006, IHS property staff in Tucson attempted to write off over $275,000 worth of property, including Jaws of Life equipment valued at $21,000. The acting area director in Tucson refused to approve the write-off because of the egregious nature of the property loss. However, no investigation has been conducted to date. According to an IHS June 2006 report, a $4,000 Apple PowerBook laptop was stolen from an employee's vehicle in the Navajo area. The employee had taken the laptop for use during off-duty hours without authorization, in violation of IHS policy. Because the employee violated IHS policy, IHS's initial determination, with which the employee agreed, was that the employee was responsible for the loss and therefore should reimburse the federal government for the value of the stolen computer. However, the IHS approving official reversed the initial determination, stating that the employee had since resigned and that the loss was due to theft. To substantiate the whistleblower's allegation of missing IT equipment, we performed our own full inventory of IT equipment at IHS headquarters. Our results were consistent with what the whistleblower claimed. Specifically, of the 3,155 pieces of IT equipment recorded in the records for IHS headquarters, we determined that about 1,140 items (or about 36 percent) were lost, stolen, or unaccounted for. These items, valued at around $2 million, included computers, computer servers, video projectors, and digital cameras. According to IHS records, 64 of the items we identified as missing during our physical inventory were "new" in April 2007.
Furthermore, we found that some of the missing computers were assigned to the IHS human resources division. These computers likely contained sensitive employee data, including names and Social Security numbers, protected under the Privacy Act of 1974. We are referring these cases, in which there was a potential release of sensitive data including employee Social Security numbers, to the HHS OIG for further investigation. During our investigation of the whistleblower's complaint, IHS made a concerted effort to obstruct our work. IHS officials made misrepresentations and fabricated documents to impede our investigation. Specifically, the IHS Director responsible for property claimed that IHS was able to find about 800 of the missing items from the whistleblower's complaint. However, based on our physical inventory testing at headquarters, we found that this statement was a misrepresentation and that only some of these items had been found. An IHS property specialist attempted to provide documentation confirming that 571 missing items were properly disposed of by IHS. However, we found that the documentation he provided was not dated and contained no signatures. When we questioned the official about these discrepancies, he admitted that he fabricated the documents. We are referring this individual to the HHS OIG for further investigation. According to IHS policy, receiving reports must be signed by an authorized employee. As part of our inventory, we requested receiving reports for three recent purchase orders. For one purchase order, IHS was not able to provide us with any receiving reports. For the other two purchase orders, IHS provided us with receiving reports that were not properly completed; e.g., the reports were not signed by the person who received the property and did not contain the date that the property was received. When we questioned these discrepancies, IHS sent us "new" receiving reports for the three purchase orders, but all of them contained questionable dates and signatures. For example, figure 1 shows the fabricated receiving report for a shipment of new scanners delivered to IHS. As shown in figure 1, there is almost a 3-month gap between the date the equipment was received in September and the date that the receiving report was completed and signed in December—even though the document should have been signed upon receipt. In fact, the new receiving report IHS provided was signed on the same date we requested it, strongly suggesting that IHS fabricated these documents in order to obstruct our investigation. Further, after testing one of the other two fabricated receiving reports, we found that 10 brand-new desktop computers worth almost $12,000 could not be located even though the receiving report indicated that they were "received" in July 2007. We selected a random sample of IT equipment inventory at seven IHS field offices to determine whether the lack of accountability for inventory was confined to headquarters or occurred elsewhere within the agency. Similar to our finding at IHS headquarters, our sample results also indicate that a substantial number of pieces of IT equipment were lost, stolen, or unaccounted for. Specifically, we estimate that for the seven locations, about 1,200 equipment items, with a value of $2.6 million, were lost or stolen.
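The roughly 1,200-item estimate follows from the random sample described in the next paragraph (table 1): 42 of 250 sampled items could not be located or substantiated, projected across the 7,211 items recorded at the seven locations. The short sketch below (in Python) is purely illustrative arithmetic added for clarity and is not part of GAO's statistical methodology; it uses only the figures reported in this section.

    # Illustrative projection using only the figures reported in this section.
    population = 7211        # IT equipment items recorded at the seven field locations
    sample_size = 250        # randomly sampled items
    missing_in_sample = 42   # items IHS could not locate or substantiate as disposed of

    missing_rate = missing_in_sample / sample_size   # about 0.17, or 17 percent
    estimated_missing_items = missing_rate * population
    print(round(estimated_missing_items))            # about 1,211, consistent with "about 1,200"
    # Note: the $2.6 million figure is not a simple share of the $19 million total recorded
    # value; it presumably reflects the recorded values of the specific items in the sample.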
As shown in table 1, our estimates are based on a statistical sample of 250 items from a population of 7,211 IT equipment items worth over $19 million recorded in property records for IT equipment at the seven field office locations. Of the 250 items that we sampled, IHS could not locate or substantiate the disposal of 42 items, or about 17 percent of the sample. Furthermore, some of the missing equipment from the seven field locations could have contained sensitive information. Although personal health information requires additional protections from unauthorized release under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and implementing regulations, we found that many of the missing laptops were assigned to IHS hospitals and, therefore, could have contained patient information, Social Security numbers, and other personal information. We are referring these cases, in which there was a potential release of sensitive data including patient information, to the HHS OIG for further investigation. IHS has also exhibited ineffective management over the procurement of IT equipment, which has led to wasteful spending of taxpayer funds. IHS purchased excessive amounts of IT equipment for its staff, most notably at the headquarters office. An IHS official stated that IHS purchased new computers using "end of the year dollars." Some examples of wasteful spending that we observed during our audit of headquarters and field offices include the following: On average, approximately 10 pieces of IT equipment are issued for every employee at IHS headquarters. Although some of these may be older items that were not properly disposed of, we did find that many employees, including administrative assistants, were assigned two computer monitors, a printer and scanner, a blackberry, subwoofer speakers, and multiple laptops in addition to their desktop computer. Many of these employees said they rarely used all of this equipment and some could not even remember the passwords for some of their multiple laptops. IHS purchased computers for headquarters staff in excess of expected need. For example, IHS purchased 134 new computer desktops and monitors for $161,700 in the summer of 2007. As shown in figure 2, as of February 2008, 25 of these computers and monitors—valued at about $30,000—were in storage at IHS headquarters. An IT specialist stated that the computers and monitors were "extras." In addition, we identified 7 new laptops that were stored in an unlocked cabinet at headquarters and never used. Computers and other IT equipment were often assigned to vacant offices. For example, many of the computer desktops and monitors purchased in the summer of 2007 for IHS headquarters were assigned to vacant offices. In addition, as shown in figure 3, we found two computers, two monitors, and three printers in an employee's office at the Albuquerque field location we visited. The IHS area property manager stated that this equipment was issued to an employee who spends a majority of his time on travel to training and treatment centers. An official for the IHS National Program stated that IHS purchased new computers using "end of the year dollars." For example, as shown in figure 4, one field office employee in Gallup, New Mexico, had an unwrapped 23-inch widescreen monitor worth almost $1,700 in her office. The employee stated that she did not know why IT sent her the monitor and she claimed that it had never been used.
The lost or stolen property and waste we detected at IHS can be attributed to the agency's weak internal control environment and its ineffective implementation of numerous property policies. In particular, IHS management has failed to establish a strong "tone at the top" by allowing inadequate accountability over property to persist for years and by neglecting to fully investigate cases related to lost and stolen items. Furthermore, IHS management has not updated its personal property management policies since 1992. Moreover, IHS did not (1) conduct annual inventories of accountable property; (2) use receiving agents for acquired property at each location and designate property custodial officers in writing to be responsible for the proper use, maintenance, and protection of property; (3) place bar codes on accountable property to identify it as government property; (4) maintain proper individual user-level accountability, including custody receipts, for issued property; (5) safeguard IT equipment; or (6) record certain property in its property management information system (PMIS). Weak tone at the top: The importance of the "tone at the top," or the role of management in establishing a positive internal control environment, cannot be overstated. GAO's internal control standards state that "management plays a key role in demonstrating and maintaining an organization's integrity and ethical values, especially in setting and maintaining the organization's ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate." However, IHS management has failed to establish and maintain these ethical values. As far back as 1997, an IHS memo by the then-Acting Director stated that the agency had problems with lost and stolen property at IHS headquarters. The memo also stated that unused equipment was not safeguarded against loss or theft. However, we found that IHS took little corrective action. For example, management failed to update IHS personal property management policies, which have not been revised since 1992. In addition, IHS has historically shown little motivation to hold its employees liable for missing property. Instead of investigating the circumstances surrounding missing property, IHS writes off the losses without holding anyone accountable. As a result, an IHS property official admitted to us that there is no accountability over IHS property. For example, figure 5 shows a report used to write off almost $900,000 worth of missing IT equipment in 2004, including laptop and desktop computers, servers, cameras, routers, and fax machines. This is just one of four reports that IHS used in 2004 to write off a combined total of $1.8 million worth of IT equipment. As shown in the figure, the report does not hold anyone responsible for the missing inventory, but it does call for the improvement of controls over property management. However, as shown by our audit and related investigations, IHS has made minimal efforts to improve property management and oversight. Despite this, IHS rewarded the individuals in its property group responsible for these functions with about $40,000 in merit awards from 2003 through 2007. No annual inventories: HHS and IHS policies require IHS personnel to conduct annual inventories of accountable personal property, including property at headquarters and in field offices.
However, IHS headquarters did not conduct any annual inventories for fiscal years 2004 through 2006. In addition, property managers were not able to accurately document the findings of their fiscal year 2007 inventory nearly a year after it was conducted. Moreover, in fiscal year 2007, only 3 out of 12 regions conducted a full physical inventory. Consequently, the extent of missing property at IHS is unknown. Failure to use receiving agents and to designate property custodial officers: IHS policy requires that each accountable area designate at least one receiving agent to receive purchased property. The receiving agent is responsible for documenting the receipt of the property (i.e., receiving report) and then distributing the property to its intended user. However, we found that acquired property is often sent directly to the user, bypassing the receiving agent. For example, the IT department sometimes receives new computers and IT equipment directly instead of using the receiving agent. In addition, HHS requires the designation of property custodial officers in writing to be responsible for the proper use, maintenance, and protection of property. However, an IHS official said that property custodial officers have not formally been designated for headquarters because of high staff turnover. Lack of property bar codes: HHS and IHS policy mandate that all accountable property have a bar code identifying it as government property. However, in our audit of IHS headquarters inventory, we identified over 100 pieces of IT equipment, including blackberries and digital cameras, that were not properly bar coded. Much of this equipment likely did not receive a bar code because, as discussed earlier, IHS does not receive property in a central location. Lack of personal custody property records: HHS requires the use of hand receipts, known as HHS Form 439, any time property is issued to an employee. This form should be retained by a property official so that property can be tracked at the time of transfer, separation, change in duties, or when requested by the proper authority. By signing this form, an IHS employee takes responsibility for the government-issued equipment. According to an IHS property official, IHS headquarters does not use the HHS Form 439, nor does it use any other type of hand receipt. Officials from several IHS regions stated that they use the form only in limited cases. Without the issuance of this form, there is no documentation as to where the equipment is located and no mechanism to hold the user accountable for the equipment. Lack of user-level accountability: HHS requires IHS to document information on the user of equipment, including building and room number, so that property can be tracked and located. However, IHS did not properly maintain this information. Property personnel instead relied on their personal recollection to locate property items. For example, on several occasions during our headquarters inventory, IHS property staff could not identify the property user. As a result, the property staff had to make inquiries of other staff to obtain information on the user of the equipment. Further, IHS personnel in the field offices stated that it took them several days to locate items that were included in our sampled inventory. Furthermore, according to the IHS policy manual, when equipment is no longer needed by the user, a request for property action should be submitted in writing to the Property Accountable Officer (PAO).
The PAO then determines if the item can be transferred to another user within IHS. However, in many cases, equipment is redistributed by the IT department or sent to another user without PAO approval. In our audit of IHS headquarters inventory, we found some items that were issued to an unspecified user or to employees who had retired or left the agency. To locate these items, IHS headquarters staff had to inquire with the employees' colleagues to determine the location of the equipment. In several cases, IHS was not able to locate the equipment assigned to separated employees, raising the possibility that the equipment was stolen. For example, one IHS employee stated that equipment had "disappeared" from an office vacated by a former employee. Weaknesses in physical security of IT equipment: According to the Indian Health Manual, property is to be adequately protected "against the hazards of fire, theft, vandalism, and weather commensurate with the condition and value" of the property. However, during our inventory review at both IHS headquarters and field office locations, we found that IHS did not follow this policy. Specifically, we found that IHS did not properly secure expensive IT equipment, leaving it vulnerable to loss and theft. For example, we found that: Surplus IT equipment that should have been disposed of was stored in unlocked employees' offices, suite areas, conference rooms, and storage rooms. For example, figure 6 shows computer equipment stored in an unlocked multipurpose storage room at IHS headquarters. In addition, an IHS headquarters employee had unsecured, newly purchased equipment, including a large flat-screen TV, dual monitors, a printer, a scanner, a desktop, a subwoofer, a video camera, and a back-up power supply. IHS did not establish proper safeguards for storing IT equipment in IHS facilities or employees' offices. For example, at one of the IHS hospitals we visited, the IT department did not lock its storage area, leaving several computers unsecured. Because equipment was not protected against damage or destruction, IHS had to dispose of over $700,000 worth of equipment because it was "infested with bat dung." Failure to use accountable property management system: HHS policy requires that all accountable property with a value of $5,000 or greater and all sensitive items with a value of $500 or greater be tracked by the PMIS property management system. The PMIS system is intended to improve accountability and standardize property records across HHS. Equipment that is not recorded in PMIS is not inventoried or otherwise controlled, placing it at increased risk of loss or theft. Although IHS had 2 years to migrate from legacy systems to the new inventory system, it has not yet fully converted to the PMIS system. Furthermore, officials from two field locations stated that they are not adding new equipment to the system because IHS headquarters told them not to use the system until further notice. Because it has not entered all property information into PMIS, IHS does not have reliable inventory records related to expensive, sensitive, and pilferable property. Specifically, IHS has failed to enter over 18,000 items, worth approximately $48 million, from headquarters and the sites we reviewed. Furthermore, we found that over half of the items we selected while performing our random sample testing of the seven field locations were not recorded in PMIS.
The types of equipment that were not entered into PMIS include a $145,000 ultrasound unit, a $140,000 X-ray unit, and a $61,000 anesthesia machine. In addition, although items such as blackberries, cell phones, and digital cameras do not meet the criteria for inclusion in PMIS, these items are highly sensitive and should be accounted for by IHS. Furthermore, the amount of equipment not entered into the system is likely much higher because we did not analyze data from IHS locations not included in our statistical sample. Our audit confirmed the whistleblower's allegation of gross mismanagement of property at IHS. IHS has exhibited a weak control environment and disregard for basic accountability over its inventory. As a result, IHS cannot account for its physical property and is vulnerable to the loss and theft of IT equipment and sensitive personal data. Further, IHS's wasteful spending on IT equipment and its lack of discipline or personal accountability for lost and stolen property and personal data have set a negative tone at the top, signaling that the status quo is acceptable. Moreover, intentional attempts by some IHS employees to thwart our investigation lead us to question the integrity and transparency of certain functions within the agency's property management group and underscore the need for stronger leadership to strengthen the tone at the top as well as throughout property management functions. We recommend that the Director of IHS strengthen IHS's overall control environment and "tone at the top" by updating and enforcing its policies and procedures for property management. As part of this effort, the Director of IHS should direct IHS property officials to take the following 10 actions: Update IHS personal property management policies to reflect any policy changes that have occurred since the last update in 1992. Investigate circumstances surrounding missing or stolen property instead of writing off losses without holding anyone accountable. Enforce policy to conduct annual inventories of accountable personal property at headquarters and all field locations. Enforce policy to use receiving agents to document the receipt of property and distribute the property to its intended user and to designate property custodial officers in writing to be responsible for the proper use, maintenance, and protection of property. Enforce policy to place bar codes on all accountable property. Enforce policy to document the issuance of property using hand receipts and make sure that employees account for property at the time of transfer, separation, change in duties, or on demand by the proper authority. Maintain information on users of all accountable property, including their buildings and room numbers, so that property can easily be located. Physically secure and protect property to guard against loss and theft of equipment. Enforce the use of the PMIS property management database to create reliable inventory records. Establish procedures to track all sensitive equipment, such as blackberries and cell phones, even if it falls under the accountable dollar threshold criteria. We received written comments on a draft of this report from the Assistant Secretary for Legislation of the Department of Health and Human Services (HHS). HHS agreed with 9 of our 10 recommendations. However, HHS stated that our report contained inaccuracies and misinterpretations that it believes seriously weaken our conclusions. In its response to our draft report, HHS cited three limitations.
First, HHS stated that our report did not appreciate the fact that IHS property management is a unique system in its collaboration with Indian Tribes and that it operates its service units throughout the country. Second, HHS said that unaccountable property may be lower than what our report identified because the ongoing process of reconciling the prior system to the new system makes it more likely that the number of currently unaccounted for property items will be reduced rather than increased as the reconciliation progresses. Further, HHS stated that the implementation process for the new system made it more difficult for IHS to provide GAO with the necessary documentation for audit. Third, HHS also stated that we overstated the net worth of unaccounted for items by not taking into account the depreciated value of those items. In addition, HHS's response cited six specific cases that it believes were misrepresented in our case studies. In response to HHS's first limitation, we do not believe that we mischaracterized the uniqueness of IHS's collaboration with Indian Tribes and the fact that it has service units throughout the country. In the report, we state that over half of IHS's budget is administered by the tribes through various contracts and compacts with the federal government. We also state that IHS operates 163 service units that include one or more health facilities, such as hospitals, health centers, village clinics, health stations, and school health centers. Furthermore, the scope of our audit included only testing of IHS property, which does not include the Tribal communities. However, we believe that because IHS operates in this type of control environment, IHS should have strong internal controls over its property and not the weak controls that were apparent in our audit. HHS also contends that the unaccountable property will be reduced from the reconciliation of the prior property system to the new system. However, we disagree—the lost or stolen property that was identified in our report came from IHS's Reports of Survey, our full physical inventory of all equipment at IHS headquarters, and random sample testing of IT equipment at seven field locations. Reports of Survey only identify specific property items that were written off IHS's inventory books from physical inventories or other circumstances. Our physical inventory testing at IHS headquarters and random sample testing of IT equipment at the seven field locations verify that there were missing property items in addition to those identified in Reports of Survey. Furthermore, as stated in our report and HHS's response, IHS did not perform complete physical inventories of equipment for most of its regional offices. Specifically, we identified that 9 of the 12 regions did not perform a physical inventory in fiscal year 2007. In addition, we reported that IHS did not complete the investigations of about $11 million of inventory shortages where a physical inventory was performed. As such, our estimate does not include lost or stolen property where physical inventories were not performed or where IHS did not complete its investigation of inventory shortages. Further, we do not believe that IHS's conversion to a new system should affect IHS's ability to maintain basic inventory documentation that is subject to audit. Without such documentation, IHS has no accountability for equipment that the American taxpayers entrusted to the agency. Thus, we believe that we likely underestimated, not overestimated, the amount of lost or stolen property.
Finally, in its written response to our draft report, HHS stated its belief that our report overstates the net worth of unaccounted for items by not taking into consideration the depreciated value of these items. While we agree that the actual "loss" is less because of depreciation, we consider acquisition cost very relevant because, if the lost or stolen property was necessary, IHS will need to buy new replacement property. It is likely that replacement costs are as much as, or more than, acquisition costs in this scenario. Furthermore, in our use of acquisition costs for property, IHS generally provided us the acquisition cost of equipment and provided little data that contained the depreciated or fair market value of the equipment. Therefore, we modified our report to state that the value of lost or stolen property was represented as the acquisition cost.
We disagree with HHS's portrayal of the six specific cases cited in its response to our draft report. Specifically:
Report of Survey for Alaska tribal self-determination award: In its response, HHS stated that most of the $6 million that was written off in the Report of Survey was transferred from IHS to local Tribal communities, the U.S. Air Force, or abandoned on an IHS construction site. As stated in our report, none of these transfers or disposals were properly documented. Without proper documentation, it is impossible to determine what happened with the property, which is why we consider it to be lost or stolen. Although HHS's comments state that these items were old and had little remaining useful value, IHS continues to purchase new property to replace old, necessary items—in which case it is likely that replacement costs are as much as (or more than) acquisition cost. Furthermore, analysis of IHS's response raises concerns about the nature of disposal for these items, including vehicles and machinery, which could cause environmental hazards as a result of abandonment.
Tucson Report of Survey and "jaws of life": HHS stated that 45 items, including the "jaws of life" equipment that we reported as lost or stolen in our draft report, had recently been found. We identified these items as lost or stolen because they were documented in a September 2006 Report of Survey. We followed up on the status of these property items on our site visits to Tucson on two occasions in late 2007 and early 2008. On both occasions, IHS confirmed that these items had not been found and that an investigation into their loss had not been performed. Based on this timeline, these items were lost for almost 2 years. IHS has not provided us any documentation to substantiate the location of the jaws of life or any other property identified in the Tucson Report of Survey. Therefore, we cannot validate that these items were found.
Allegation of misrepresentation by IHS property staff: HHS stated that the majority of the 1,180 items that were not accounted for in the April 2007 inventory had been located and reconciled by January 2008. Additionally, as we state in our report, the IHS Director responsible for property claimed that IHS was able to find about 800 of these missing items. However, based on our physical inventory testing at headquarters, which included verifying IHS's reconciled items in January 2008, we found that only some of these items had been found. We also identified items missing from IHS's April 2007 inventory in addition to the 1,180-item shortage identified by IHS.
Specifically, of the 3,155 pieces of IT equipment recorded in the records for IHS headquarters, we determined that about 1,140 items (or about 36 percent) were lost, stolen, or unaccounted for. Part of the discrepancy can be attributed to the fact that we did not accept the fabricated documents that the IHS property management specialist provided us, as discussed below. We continue to believe that the IHS Director responsible for property attempted to thwart our investigation through misrepresentations.
Allegation of fabricated documents: HHS stated that IHS generated disposal records in January 2008 to "establish an audit trail" showing that 571 items missing during our inventory work were disposed of properly. However, when these documents were presented to us, they were identified as the actual supporting documents, not an "audit trail." Additionally, HHS failed to acknowledge that the disposal records were not dated and contained no signatures approving the disposal. Because these records clearly did not meet evidence standards, we asked the IHS property employee who gave us the documents about their origin. He admitted to fabricating them in order to satisfy our request for the disposition of the property. By focusing on the January 2008 date of our request, HHS is missing the point of our finding—that an IHS employee tried to make it appear that the missing property was properly accounted for by generating documents and representing them as authentic disposal records. We have referred the matter to the HHS Office of Inspector General for further investigation.
Allegation of wasteful purchases: HHS stated that it initiated a procurement strategy to increase the cost efficiency of replacing computer technology for its employees by buying in bulk so that it can take advantage of pricing discounts and reduce the critical down time for IT tools. It also stated that the 25 on-hand "spare computers" noted in the report were an acceptable level of inventory. We agree that outdated technology should be replaced by taking advantage of bulk purchases. We also agree that there should be some inventory held in reserve for emergency needs that arise during the year. However, as stated in the report, we found that there were 3 computers for every person at IHS headquarters—a ratio that bulk ordering policies do not adequately explain. In addition to the 25 new and unused computers cited by HHS in its response, we identified several other examples of waste at IHS headquarters, including computer equipment items issued to vacant offices and 7 new and unused laptops stored in an unlocked cabinet. We also noted examples of waste at the field locations, such as an unwrapped, 23-inch, widescreen monitor worth almost $1,700. The employee in possession of the monitor stated she did not know why IT sent her the monitor and claimed that it had never been used. We believe that such examples demonstrate wasteful purchases of equipment rather than a prudent procurement strategy.
Yard Sale: HHS stated that IHS headquarters staff had no knowledge of a "yard sale" of computers and other property in Nevada. We reported on this "yard sale" based on the confirmation of eight IHS property officials, including the Phoenix Area executive officer. In its response, HHS claimed that the 17 computers sold at this "yard sale" were used for educational purposes and thus likely did not contain sensitive information.
The computers were located at a Youth Wellness Center and, according to the Phoenix Area property manager, were never "cleaned" before transfer outside of the center. Hence, we continue to believe that the potential release of patient data and the obvious impropriety of holding a "yard sale" for government equipment make it prudent for the HHS OIG to investigate the matter.
Finally, HHS disagreed with our recommendation to establish procedures to track all sensitive equipment such as blackberries and cell phones even if they fall under the accountable dollar threshold criteria. We made this recommendation because we identified examples of lost or stolen equipment that contained sensitive data, such as a PDA containing medical data for patients at a Tucson, Arizona, area hospital. According to an IHS official, the device contained no password or data encryption, meaning that anyone who found (or stole) the PDA could have accessed the sensitive medical data. While we recognize that IHS may have taken steps to prevent the unauthorized release of sensitive data and acknowledge that it is not required to track devices under a certain dollar threshold, we are concerned about the potential harm to the public caused by the loss or theft of this type of equipment. Therefore, we continue to believe that such equipment should be tracked and that our recommendation remains valid.
As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of the Department of Health and Human Services, the Director of IHS, and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
To substantiate the allegation of lost or stolen property and wasteful spending at the Indian Health Service (IHS), we analyzed IHS records of lost or stolen property from fiscal year 2004 through fiscal year 2007. We also conducted a full physical inventory of property at IHS headquarters and statistically tested information technology (IT) equipment inventory at seven selected IHS field locations. To identify specific cases of lost or stolen property and wasteful spending, we analyzed IHS documents and made observations during our physical inventory and statistical tests. We performed a full physical inventory at IHS headquarters because the whistleblower specifically identified problems at that location. Specifically, we tested all 3,155 property items, largely consisting of IT equipment, that IHS had recorded in its headquarters property records as of April 2007. We physically observed each item and its related IHS-issued bar code and verified that the serial number related to the bar code was consistent with IHS's property records. Although IHS property in the field locations includes inventory items such as medical equipment and heavy machinery, we performed a statistical test of only IT equipment inventory at seven IHS field locations to determine whether the lack of accountability for inventory was pervasive at other locations in the agency.
We limited our scope to testing only IT equipment items, which are highly pilferable and can be easily converted to personal use, such as laptops, desktop computers, and digital cameras. We selected the seven field locations based on book value of inventory and geographic proximity. We selected five field office locations because they had the highest dollar amount of IT equipment according to IHS's property records. We selected the two additional sites based on their geographic proximity to the other field locations being tested. Our findings at these seven locations cannot be generalized to IHS's other locations. To estimate the extent of lost or stolen property at these seven locations, we selected a probability sample of 250 items from a population of 7,211 IT items that had a book value of over $19 million. Because we followed a probability procedure based on random selections, with each item having an equal chance of being selected, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Based on this sample, we estimate the number, the percent, and the dollar amount of lost or stolen property at IHS. The 95 percent confidence intervals for each of these estimates are summarized below. We considered equipment to be lost or stolen if (1) we could not physically observe the item during the inventory; (2) IHS could not provide us with a picture of the item, with a visible bar code and serial number, within 2 weeks of our initial request; or (3) IHS could not provide us with adequate documentation to support the disposal of the equipment. We performed appropriate data reliability procedures for our physical inventory testing at IHS headquarters and sample testing at the seven case study locations, including (1) testing the existence of items in the database by observing the physical existence of all items at IHS headquarters and IT equipment selected in our sample, and (2) testing the completeness of the database by performing a 100 percent floor-to-book inventory at IHS headquarters and judgmentally selecting inventory items in our sample to determine if these items were maintained in IHS inventory records. Although our testing of the existence and completeness of IHS property records determined that IHS inventory records are neither accurate nor complete, we concluded that the data were sufficient to perform these tests and project our results to the population of IT equipment. In addition, we interviewed IHS agency officials, property management staff, and other IHS employees. We also interviewed Department of Health and Human Services (HHS) officials concerning the migration of the Property Management Information System (PMIS) and officials at the Program Support Center (PSC). Although we did not perform a systematic review of IHS internal controls, we identified key causes of lost and stolen property and wasteful spending at IHS by examining IHS and HHS policies and procedures, conducting interviews with IHS officials, and making observations of property through our inventory testing.
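To illustrate the kind of projection described above, the following is a minimal sketch in Python of how a simple random sample of 250 of 7,211 items could be expanded to population-level estimates with 95 percent confidence intervals. The sample records, the 17 percent loss rate, and the dollar values are placeholders for illustration only; GAO's actual estimation procedures are not shown here and may differ.

```python
# A minimal sketch (not GAO's actual estimation program) of projecting a
# simple random sample of 250 of 7,211 IT items to population estimates with
# 95 percent confidence intervals. All sample values below are hypothetical.
import math
import random

random.seed(1)
N = 7211   # population of IT equipment items at the seven field locations
n = 250    # items selected for testing
# Hypothetical sample records: (lost_or_stolen flag, recorded book value)
sample = [(random.random() < 0.17, round(random.uniform(500, 4000), 2)) for _ in range(n)]

def estimate_total(values, N, n):
    """Project a sample mean to a population total with a normal-approximation
    95 percent confidence interval, using the finite population correction."""
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    se_total = N * math.sqrt((var / n) * (1 - n / N))
    total = N * mean
    return total, total - 1.96 * se_total, total + 1.96 * se_total

lost_flags = [1.0 if lost else 0.0 for lost, _ in sample]
lost_values = [value if lost else 0.0 for lost, value in sample]

count_est, count_lo, count_hi = estimate_total(lost_flags, N, n)
value_est, value_lo, value_hi = estimate_total(lost_values, N, n)

print(f"Estimated lost or stolen items: {count_est:.0f} (95% CI {count_lo:.0f} to {count_hi:.0f})")
print(f"Estimated percent lost or stolen: {100 * count_est / N:.1f}%")
print(f"Estimated value lost or stolen: ${value_est:,.0f} (95% CI ${value_lo:,.0f} to ${value_hi:,.0f})")
```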
We conducted our forensic audit and related investigations from June 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. Despite IHS's efforts to obstruct our review, we were still able to accomplish our objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We performed our investigative work in accordance with standards prescribed by the President's Council on Integrity and Efficiency. For further information about this report, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. In addition to the individual named above, the following individuals made contributions to this report: Verginie Amirkhanian, Erika Axelson, Joonho Choi, Jennifer Costello, Jessica Gray, Richard Guthrie, John Kelly, Bret Kressin, Richard Kusman, Barbara Lewis, Megan Maisel, Andrew McIntosh, Shawn Mongin, Sandra Moore, James Murphy, Andy O'Connell, George Ogilvie, Chevalier Strong, Quan Thai, Matt Valenta, and David Yoder.
In June 2007, GAO received information from a whistleblower through GAO's FraudNET hotline alleging millions of dollars in lost and stolen property and gross mismanagement of property at the Indian Health Service (IHS), an operating division of the Department of Health and Human Services (HHS). GAO was asked to conduct a forensic audit and related investigations to (1) determine whether GAO could substantiate the allegation of lost and stolen property at IHS and identify examples of wasteful purchases and (2) identify the key causes of any loss, theft, or waste. GAO analyzed IHS property records from fiscal years 2004-2007, conducted a full physical inventory at IHS headquarters, and statistically tested inventory of information technology (IT) equipment at 7 IHS field locations in 2007 and 2008. GAO also examined IHS policies, conducted interviews with IHS officials, and assessed the security of property. Millions of dollars' worth of IHS property has been lost or stolen over the past several years. Specifically, (1) IHS identified over 5,000 lost or stolen property items, worth about $15.8 million, from fiscal years 2004 through 2007. These missing items included all-terrain vehicles and tractors; Jaws of Life equipment; and a computer containing sensitive data, including social security numbers. (2) GAO's physical inventory identified that over 1,100 IT items, worth about $2 million, were missing from IHS headquarters. These items represented about 36 percent of all IT equipment on the books at headquarters in 2007 and included laptops and digital cameras. Further, IHS staff attempted to obstruct GAO's investigation by fabricating hundreds of documents. (3) GAO also estimates that IHS had about 1,200 missing IT equipment items at seven field office locations worth approximately $2.6 million. This represented about 17 percent of all IT equipment at these locations. However, the dollar value of lost or stolen items and the extent of compromised data are unknown because IHS does not consistently document lost or stolen property and GAO only tested a limited number of IHS locations. Information related to cases where GAO identified fabrication of documents and potential release of sensitive data is being referred to the HHS Inspector General for further investigation. GAO also found evidence of wasteful spending, including identifying that there are about 10 pieces of IT equipment for every one employee at headquarters. GAO's investigation also found that computers and other IT equipment were often assigned to vacant offices. GAO identified that the loss, theft, and waste can be attributed to IHS's weak internal control environment. IHS management has failed to establish a strong "tone at the top," allowing property management problems to continue for more than a decade with little or no improvement or accountability for lost and stolen property and compromise of sensitive personal data. In addition, IHS has not effectively implemented numerous property policies, including the proper safeguards for its expensive IT equipment. For example, IHS disposed of over $700,000 worth of equipment because it was "infested with bat dung."
Essentially, HCFA's calculation of its per-enrollee (capitation) rate in each county can be expressed as follows:
per-enrollee capitation rate = 0.95 × (county average per capita FFS cost) × (enrollee's demographic risk factor)
Medicare pays risk HMOs a fixed amount per enrollee—a capitation rate—regardless of what each enrollee's care actually costs. Medicare law stipulates that the capitation rate be set at 95 percent of the costs Medicare would have incurred for HMO enrollees if they had remained in FFS. In implementing the law's rate-setting provisions, HCFA estimates a county's average per-beneficiary cost and multiplies the result by 0.95. The product is the county adjusted average per capita cost rate. HCFA then applies a risk-adjustment factor to the county rate. Under HCFA's risk-adjustment system, beneficiaries are sorted into groups according to their demographic traits (age; sex; and Medicaid, institutional, and working status). These traits serve as proxy measures of health status. HCFA calculates a risk factor for each group—the group's average cost in relation to the cost of all beneficiaries nationwide. For example, in 1995 the risk factor for younger seniors (65- to 70-year-old males) was .85, whereas for older seniors (85-year-old or older males) it was 1.3. HCFA uses the risk factor to adjust the county rate, thereby raising or lowering Medicare's per capita payment for each HMO enrollee, depending on the individual's demographic characteristics. For HCFA's rate-setting method to produce appropriate rates, the risk adjusters must reliably differentiate among beneficiaries with different health status. Much has been written about the inadequacy of Medicare's risk adjuster to account for the tendency of HMOs to experience favorable selection. More than a decade of research has concluded that beneficiaries enrolling in HMOs are, on average, healthier than those remaining in FFS. Studies of pre-1990 data found that Medicare HMO enrollees—in a period just prior to their HMO enrollment—had health care costs that were from 20 percent to 42 percent lower than those of FFS beneficiaries with the same demographic characteristics. Studies of post-1990 data also showed costs of Medicare HMO enrollees ranging from 12 percent to 37 percent lower than those of their FFS counterparts. The problem for Medicare posed by favorable selection is that HMO enrollees are healthier than FFS beneficiaries within the same demographic group; for example, 70-year-old males in HMOs are, on average, healthier than 70-year-old males in FFS. Medicare's risk adjuster is said to be inadequate because, while making broad distinctions among beneficiaries of different age, sex, and other demographic characteristics, it does not account for the significant health differences among demographically identical beneficiaries. The cost implications of health status differences can be dramatic for two demographically alike beneficiaries: one may experience occasional minor ailments while the other may suffer from a serious chronic condition. Devising a risk adjuster sensitive enough to capture health status differences, however, is such a technically complex and difficult task that years of independent research and HCFA-sponsored research have not yet produced an ideal risk adjuster. In reports issued in 1994 and 1995, we identified several promising, practical risk adjusters and suggested that HCFA implement an interim improvement. Independent of risk adjustment, modifying the method for calculating the county rate would help reduce Medicare's excess HMO payments.
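As a rough illustration of the rate-setting arithmetic just described, the short Python sketch below applies the 95 percent factor and the 1995 example risk factors cited above to a hypothetical county cost figure; it is illustrative only, not HCFA's rate-setting system.

```python
# Minimal sketch of the capitation arithmetic described above. The county cost
# is a made-up monthly figure; the risk factors are the 1995 examples cited in
# the text. This is illustrative only, not HCFA's rate-setting software.
county_avg_per_capita_ffs_cost = 450.00                  # hypothetical monthly cost
county_rate = 0.95 * county_avg_per_capita_ffs_cost      # the county (AAPCC-based) rate

risk_factors = {
    "males 65 to 70": 0.85,     # 1995 factor cited for younger senior males
    "males 85 or older": 1.30,  # 1995 factor cited for the oldest senior males
}

for group, factor in risk_factors.items():
    payment = county_rate * factor
    print(f"{group}: monthly capitation payment = ${payment:.2f}")
```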
HCFA currently estimates the average Medicare costs of a county’s beneficiaries using the costs of only those beneficiaries in Medicare’s FFS sector. This method would be appropriate if the average health cost of FFS beneficiaries were the same as that of demographically comparable HMO enrollees. However, in counties where there are cost disparities between Medicare’s FFS and HMO enrollee populations, this method can either overstate the average costs of all Medicare beneficiaries and lead to overpayment or understate average costs and lead to underpayment. Suppose a county has 1,000 Medicare beneficiaries with identical demographic characteristics. Of these, 800 beneficiaries are in Medicare’s FFS program and cost Medicare on average $100 a month. The remaining 200 beneficiaries are enrolled in HMOs, but these beneficiaries would have cost an average of $75 a month had they remained in the FFS program. For all 1,000 beneficiaries, the county average cost would be $95 a month. HCFA’s method excludes the HMO enrollees with their lower costs from its calculations, producing a county average of $100 a month. Consequently, HCFA overestimates this county’s average monthly cost by $5, producing $1,000 a month in excessive Medicare payments to HMOs (200 beneficiaries times $5). The difficulty in correcting this problem comes from the inability to observe the costs HMO enrollees would have incurred if they had remained in the FFS sector. In the illustration above, HCFA needs a way to estimate that the beneficiaries enrolled in HMOs would have cost $75 a month in the FFS sector rather than $100. Therefore, we developed a method to estimate HMO enrollees’ expected FFS costs using information available to HCFA. Our method consists of two main steps: First, we computed the average costs of new HMO enrollees during the year before they enrolled—that is, while they were still in FFS Medicare. These FFS costs are available through HCFA’s claims data. Next, we adjusted this amount to reflect the expectation that an enrollee’s use of health services will, over time, rise. Having completed these steps, we combined the result with an estimate of the average cost of FFS beneficiaries. This new average produced a county rate that reflected the costs of all Medicare beneficiaries. Thus, our method helps prevent biasing HMO payments with either overgenerous estimates of enrollees’ initial health costs or low estimates that fail to compensate for the likelihood of rising health costs over time. The technical details of this approach are discussed in appendix I. To illustrate the effect of our approach, we analyzed data for counties with different shares of beneficiaries enrolled in HMOs. We found that our method could have reduced excess payments by more than 25 percent. Substantially better risk adjustment, which appears to be years away from implementation, would target the remaining 75 percent of excess payments. Specifically, for the counties that we analyzed, we estimated that total excess payments in 1995 amounted to about $1 billion of the roughly $6 billion in total Medicare payments to risk HMOs in the state. (App. III discusses excess payment estimates in further detail.) Applying our method for setting county rates would have reduced the excess by about $276 million. 
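The hypothetical county above can be reproduced with a few lines of Python; the figures are the illustrative ones from the text ($100 and $75 average monthly costs for 800 FFS beneficiaries and 200 HMO enrollees), not actual county data.

```python
# Worked version of the hypothetical county described above: 800 FFS
# beneficiaries at $100 a month and 200 HMO enrollees whose expected FFS cost
# is $75 a month. Illustrative only.
ffs_count, ffs_cost = 800, 100.0
hmo_count, hmo_expected_ffs_cost = 200, 75.0

sac_ffs = ffs_cost   # HCFA's estimate: based on FFS beneficiaries only
sac_all = (ffs_count * ffs_cost + hmo_count * hmo_expected_ffs_cost) / (ffs_count + hmo_count)

overstatement_per_enrollee = sac_ffs - sac_all        # $5 per enrollee per month
monthly_excess = overstatement_per_enrollee * hmo_count
print(f"County average: ${sac_all:.2f}; overstatement: ${overstatement_per_enrollee:.2f}; "
      f"monthly excess payments: ${monthly_excess:,.2f}")
```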
We also found that the excess payments attributable to inflated county rates were concentrated in 12 counties with large HMO enrollment and ranged from less than 1 percent to 6.6 percent of the counties’ total HMO payments, representing between $200,000 and $135.3 million. (See table 1.) Despite the size of these amounts, the application of our method would have produced relatively small changes in the monthly, per-beneficiary capitation payments, ranging from $3 to $38. The excess payments shown in table 1 reflect the difference between Medicare’s county rates and rates calculated by our method. As shown in the table, five counties accounted for more than 90 percent of the state’s county-rate excess payments. Our analysis did not support the hypothesis, put forward by the HMO industry and others, that the excess payment problem will be mitigated as more beneficiaries enroll in Medicare managed care and HMOs progressively enroll a more expensive mix of beneficiaries. Our data—from counties with up to a 39-percent HMO penetration—indicated that excess payments as a percentage of total HMO payments were higher in counties with higher Medicare penetration. For example, as seen in figure 1, the four counties with the highest rates of excess payment, ranging from 5.1 to 6.6 percent, were also among the counties with the highest enrollment rates. If the relationship between enrollment and excess payments we found for California in 1995 persists, excess payments are likely to grow. The recent trend in Medicare HMO enrollment suggests continued growth in the next several years. Therefore, some counties with moderate enrollment today may experience higher enrollment rates in the future, exacerbating the excess payment problem. (See app. III, table III.1, for estimates of future excess HMO payments in California based on projected enrollment.) Because the data we used to estimate HMO enrollees’ costs come from data that HCFA compiles to update HMO rates each year, our method has two important advantages. First, HCFA’s implementation of our proposal could be achieved in a relatively short time. The time element is important, because the prompt implementation of our method would avoid locking in a current methodological flaw that would persist in any adopted changes to Medicare’s HMO payment method that continued to use either current county rates as a baseline or FFS costs to set future rates. Second, the availability of the data would also make our proposal economical: we believe that the savings to be achieved from reducing county-rate excess payments would be much greater than the administrative costs of implementing our modification. We recognize that for counties with little or no HMO enrollment, HCFA’s current method of estimating the county rate would yield virtually the same result as our method because the small number of HMO enrollees is overwhelmed by the large number of FFS beneficiaries and has only a minimal effect on average FFS costs. Thus, HCFA could decide to use a beneficiary enrollment threshold for computing revised county rates. Medicare’s HMO rate-setting problems have prevented it from realizing the savings that were anticipated from enrolling beneficiaries in capitated managed care plans. In fact, enrolling more beneficiaries in managed care could increase rather than lower Medicare spending—unless Medicare’s method of setting HMO rates is revised. 
Our method of calculating the county rate would have the effect of reducing payments more for HMOs in counties with higher excess payments and less for HMOs in counties with lower excess payments. In this way, our method represents a targeted approach to reducing excess payments and could lower Medicare expenditures by at least several hundred million dollars each year. Furthermore, because some proposals to reform Medicare HMO rate-setting rely on current county payment rates as a benchmark, correcting the current county rates would avoid locking in varying degrees of excess payments across counties for years to come. We recommend that the Secretary of Health and Human Services direct the HCFA Administrator to incorporate the expected FFS costs of HMO enrollees into the methodology for establishing county rates using the method we explain in this report and adjust Medicare payment rates to risk contract HMOs accordingly. In commenting on a draft of this report, HHS agreed that, because Medicare HMO enrollees tend to be healthier than FFS beneficiaries, the current payment methodology may have resulted in Medicare’s overpaying HMOs substantially—according to HHS, by $1 billion in fiscal year 1996. HHS noted that the President’s fiscal year 1998 budget proposes to address the excess payment problem by lowering HMO capitation rates in calendar year 2000 and developing a new payment system to be phased in beginning in 2001. However, our recommended rate-setting change could be implemented much sooner and would continue to be useful after HCFA develops a new HMO payment system. Although HHS did not question that our recommended rate-setting change would save hundreds of millions of dollars each year for Medicare and taxpayers, the Department doubted the change would be equitable and relatively easy to implement. However, our approach to reducing excess payments is equitable because it is targeted—in contrast to HHS’ proposed across-the-board cut—and would reduce payments only in those counties where HMOs receive excess payments. Furthermore, our recommended change should require very little additional HCFA staff time and no collection of new data. (See app. IV for the full text of HHS’ comments and our response.) As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies to the Secretary of Health and Human Services; the Director, Office of Management and Budget; the Administrator of the Health Care Financing Administration; and other interested parties. We will also make copies available to others upon request. This work was done under the direction of William J. Scanlon, Director, Health Financing and Systems Issues. If you or your staff have any questions about this report, please contact Mr. Scanlon at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix V. Despite evidence from a number of studies that health maintenance organization (HMO) enrollees tend to be healthier than demographically comparable fee-for-service (FFS) beneficiaries (“favorable selection”), the Health Care Financing Administration (HCFA) rate-setting method implicitly assumes that the health service needs of both groups are the same. To the extent that favorable selection occurs, HCFA’s assumption increases the capitation rates HCFA pays to risk HMOs and results in excess payments. 
This appendix describes how making more realistic assumptions concerning the health status of HMO enrollees can partially correct the excess payment problem. In essence, our approach determines the extent to which HCFA’s method overestimates average Medicare FFS costs and thus inflates the county rate—one component of HMO capitation payments. This appendix also briefly discusses a related method for estimating aggregate excess payments. The basic steps HCFA takes to determine capitation payments can be described as follows. HCFA calculates the per capita costs in Medicare FFS, or standard average cost (SAC). This is done for each county, partly to allow for geographic differences in medical prices. The basic capitation rate, or county rate, is set at 95 percent of the county per capita cost. That is, COUNTY = 0.95 SAC. Finally, payments for specific individuals are adjusted up or down on the basis of a limited set of demographic factors, or “risk factors.” These risk factors are intended to partially adjust for differences in expected health care costs of beneficiaries of different ages, gender, and so on. Excess payments can occur if HMOs enroll a group of beneficiaries that is healthier than the average FFS beneficiary and the capitation rate is not sufficiently adjusted for the differences in health status. In HCFA’s current method, favorable selection can cause excess payments, partly because HCFA’s risk factors inadequately adjust for differences in beneficiaries’ health status and partly because SAC overstates the costs of serving HMO enrollees. HCFA’s risk factors adjust for favorable selection using five characteristics (age, sex, Medicaid eligibility status, institutional status, and working status) that are relatively poor predictors of beneficiaries’ health care needs. Specifically, the risk factors are a set of weights—intended to reflect the relative health risk of each beneficiary—used to adjust the basic capitation rate up or down. For example, the weight assigned to 65- to 70-year-old males was .85 in 1995, implying that they had a greater health cost risk—higher expected health costs—than 65- to 70-year-old females, whose weight was .70. Beneficiaries with the same risk factor are assumed to have the same relative health service needs. However, if 70-year-old males enrolling in HMOs tend to be healthier than the 70-year-old males who remain in FFS, then the risk factor will overcompensate for the enrollees’ costs and the HMOs are said to have benefited from favorable selection. If HMOs’ enrollees tend to be healthier than the average beneficiary in FFS, then HCFA’s method will overestimate the expected cost of serving Medicare beneficiaries in FFS. The foundation of the rate-setting formula consists of the standard average cost to Medicare of a county’s FFS beneficiaries. (By standard, we mean this cost measure is normalized for differences in each county’s demographic composition, relative to the national average). HCFA calculates SAC from the costs of FFS program beneficiaries alone (SACFFS). However, to the extent that the health care costs of Medicare’s HMO enrollee population are lower, on average, than those of beneficiaries in FFS, the exclusion of HMO enrollees’ costs (that is, what they would have cost Medicare in FFS) causes SAC and, ultimately, the capitation rate, to be too high. 
A better way to set Medicare HMO rates would be based on a SAC that reflected both the costs of beneficiaries in FFS (SACFFS) and what the costs of HMO enrollees would have been if they had been in FFS (SACHMO). Setting rates this way would lessen the amount of adjustment needed to reflect differences in health status because HMO enrollees' expected FFS costs would already be included. The estimated average cost for all beneficiaries in the county could be calculated as a weighted average of SACFFS and SACHMO, where pFFS and pHMO are the proportions of county beneficiaries in FFS and HMOs, respectively:
SACALL = (pFFS × SACFFS) + (pHMO × SACHMO) (equation 2)
However, because HCFA cannot directly observe what the FFS costs would have been for beneficiaries currently enrolled in HMOs (SACHMO), the agency assumes that the averages for the two groups are equal. If relatively healthy beneficiaries enroll in HMOs while less healthy beneficiaries remain in Medicare FFS, however, SACHMO will be less than SACFFS. By assuming the two costs are equal, HCFA overstates the expected cost of serving HMO enrollees under FFS. This overestimate increases as the gap between SACFFS and SACHMO widens and can increase as the proportion of beneficiaries in HMOs (pHMO) increases. Because SAC forms one of the building blocks in the capitation rate formula, overestimating SAC leads to excess payments to HMOs. The following examples illustrate how, in the presence of favorable selection, HCFA's calculation of SAC and COUNTY results in excess payments to HMOs. If a county had 10 demographically identical beneficiaries, 8 of whom cost Medicare nothing each year and 2 who cost $2,000 each, the county's average per capita cost, or SACALL, would equal $400 ($4,000 divided by the 10 beneficiaries). If no beneficiaries were enrolled in HMOs, SACFFS would equal SACALL, or $400. In contrast, if two beneficiaries costing Medicare nothing had joined HMOs, SACFFS—on the basis of the eight remaining FFS beneficiaries—would equal $500 ($4,000 divided by eight). Under HCFA's method, COUNTY would be $500 × .95—reflecting just the average costs of beneficiaries in the FFS sector—instead of $400 × .95. Thus, Medicare would pay HMOs $100 × .95 more than if capitation rates were based on the actual average expected FFS cost of all beneficiaries in the county. Furthermore, the enrollment of additional beneficiaries with low costs in the county's HMOs would widen the disparity between SACFFS and SACALL. For example, if six beneficiaries costing Medicare nothing had joined HMOs, SACFFS would equal $1,000 ($4,000 divided by the four beneficiaries still in FFS), or more than double SACALL's value of $400. In this case, Medicare's payments to HMOs would be based on a COUNTY equal to $1,000 × .95 instead of the appropriate $400 × .95. We developed a method to estimate the potential FFS costs for HMO enrollees that allows calculation of average FFS cost estimates based on all beneficiaries living in the county (SACALL). We identified the FFS cost experience of recent risk HMO enrollees prior to their HMO enrollment. Drawing on these prior-use cost data and data on changes in individuals' health costs over time, we estimated the expected costs (on an FFS basis) of people who had been enrolled in an HMO for different periods of time. Finally, we combined these estimates to calculate SACHMO, which reflected the characteristics of the county's HMO enrollees, including the length of time they had been HMO enrollees.
This "prior-use" cost approach is necessary because no other relevant cost data are currently available to HCFA. After a beneficiary enrolls in an HMO, HCFA receives no information on the health care services provided to the beneficiary or their costs. We made adjustments to respond to two major criticisms of previous studies that employed prior-use costs to estimate expected postenrollment costs.
1. Unadjusted prior-use estimates do not allow for the possibility that enrollees' average expected costs can regress toward the mean cost of FFS beneficiaries. That is, as time passes, enrollees' average costs can rise and approach the average costs of the FFS beneficiaries, rather than remain at their preenrollment levels. If this happens, the disparity between the prior-use costs of HMO enrollees and the costs of comparable FFS beneficiaries overstates the actual difference in cost that exists in years following enrollment.
2. Unadjusted prior-use estimates underrepresent enrollees' "death costs." Unadjusted prior-use cost methodologies cannot take account of the full costs associated with death for enrollees, because beneficiaries must survive the prior year to enroll.
Not making these adjustments could result in an overestimate of excess Medicare HMO payments. In developing our method to approximate SACHMO, we struck a balance between two potentially conflicting goals: (1) minimizing the computational burden and (2) maximizing the accuracy of the enrollees' expected FFS cost estimate. The particular assumptions and modifications of our augmented prior-use methodology are detailed below. We recognize, however, that other approaches to approximating SACHMO could also result in slightly different, but equally plausible, estimates of enrollees' expected FFS costs. Once we estimated SACHMO, we used the proportions of beneficiaries in FFS and HMOs to compute SACALL. (See equation 2.) Because we also knew actual HMO payments for each county, we could use our new estimates to compute estimates of county rate excess payments. Because Medicare allows beneficiaries to switch among specific HMOs or between an HMO and FFS monthly, we classified beneficiaries according to the number of months they spent in a risk HMO or FFS during calendar years 1991 and 1992. We defined beneficiaries as enrollees (in risk HMOs) if they were Medicare eligible in 1991 and were enrolled in a risk contract HMO at least 7 months in 1992. We assigned beneficiaries who died in 1992 to the enrollee category if (1) they died while enrolled in a risk contract HMO and (2) it would have been feasible for them to have completed 7 months enrolled in an HMO in 1992 had they lived all 12 months of 1992. To estimate SACHMO, we needed to develop FFS cost estimates for those beneficiaries soon to enroll in HMOs. Therefore, we created the category of joiners, a subset of enrollees. Joiners are beneficiaries who spent at least 6 months in FFS in 1991 and at least 7 months in a risk HMO in 1992. To estimate SACFFS, we used FFS costs for beneficiaries who spent at least 6 months in FFS in both 1991 and 1992. Beneficiaries who died in 1992 and did not meet the criteria for inclusion in the enrollee category, but who were enrolled in FFS for at least 6 months in 1991, were assigned to the FFS category. We adjusted prior-year cost data of joiners to approximate average costs in the base year for enrollees because their costs (on an FFS basis) are unobserved while they are HMO enrollees.
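The classification rules above can be expressed as a simplified Python sketch. It is illustrative only: the field names and the way the feasibility test for decedents is written are assumptions made for this sketch, not the actual program logic used for the analysis.

```python
# A simplified sketch of the beneficiary classification rules described above.
# Field names and the decedent "feasibility" test are illustrative assumptions.
def classify_beneficiary(eligible_1991, ffs_months_1991, ffs_months_1992,
                         hmo_months_1992, died_in_1992=False,
                         died_while_in_risk_hmo=False, hmo_start_month_1992=None):
    """Return 'joiner', 'enrollee', 'ffs', or 'other'.

    Joiners are the subset of enrollees with at least 6 FFS months in 1991;
    their prior-year FFS costs anchor the estimate of SACHMO.
    """
    enrollee = eligible_1991 and hmo_months_1992 >= 7
    if not enrollee and died_in_1992 and died_while_in_risk_hmo and eligible_1991:
        # Decedents count as enrollees if completing 7 HMO months in 1992 would
        # have been feasible had they lived the full year (enrollment by June).
        enrollee = hmo_start_month_1992 is not None and hmo_start_month_1992 <= 6
    if enrollee:
        return "joiner" if ffs_months_1991 >= 6 else "enrollee"
    if ffs_months_1991 >= 6 and (ffs_months_1992 >= 6 or died_in_1992):
        return "ffs"
    return "other"

# Example: 12 FFS months in 1991 and 9 risk-HMO months in 1992 -> "joiner"
print(classify_beneficiary(True, 12, 3, 9))
```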
Table I.1 summarizes how we adjusted prior-use costs. In each case, we constructed average monthly costs using total Medicare claims paid and months of FFS eligibility. The assumptions and adjustments we made to assign costs to the enrollee category of beneficiaries are described in the following sections. (The entries of table I.1 include the 1991 costs of people who joined an HMO in 1992 (joiners), with costs increased to account for the RTM effect, and the 1991 costs of FFS beneficiaries, applied to people who died within the sample year, 1992.)
In estimating SACHMO, we used the prior-use costs of joiners as a baseline in estimating the (unobserved) expected FFS costs of all HMO enrollees. Adjusting these baseline costs for regression toward the mean and death costs translates the joiners' costs into enrollees' costs. Our analysis of HMO enrollees from several years suggested that new HMO enrollees (joiners) in a given year tend to be similar—in terms of cost histories prior to joining an HMO—to longer-term HMO enrollees. Therefore, we assumed that enrollees' costs could be estimated by adjusting joiners' costs for expected cost changes after enrollment. This assumption enabled us to estimate costs for all HMO enrollees on the basis of a subset who had FFS costs in the prior year. (If the data had not supported this assumption, we would have had to collect FFS costs on all HMO enrollees prior to their enrollment. Because some enrollees had been HMO enrollees for several years while Medicare eligible, this more comprehensive task would have required complex adjustments to account for changes in price levels, medical practice patterns, and technology across years. In fact, such an approach would not have been possible for beneficiaries who enrolled in an HMO upon becoming Medicare eligible.) We tested our assumption that joiners' costs—with some adjustments—are representative of enrollees' costs by examining joiners' costs over several years. Noting that most enrollees were joiners in earlier years, we examined whether the relationship of joiners' costs in the base year to average costs of those remaining in the FFS system was similar to the relationship of joiners' costs in earlier years, relative to FFS beneficiaries' costs. We found that the ratio of joiners' to FFS beneficiaries' costs remained relatively stable over time. Therefore, we concluded that joiners' costs (in the base year) are representative of the just-prior-to-enrollment costs of enrollees from many years before the base year. The ratio of joiners' costs to FFS beneficiaries' costs showed no trend and did not differ greatly from year to year. In fact, in all the years we examined, the ratio varied by less than 10 percent of its 3-year average. This suggests that, relative to FFS beneficiaries, soon-to-be HMO enrollees in 1992 and 1993 (who constituted about 25 percent of all HMO enrollees in 1994) were very similar to soon-to-be HMO enrollees in 1994. Ratios for each of three California counties for the years 1992 through 1994 are shown in table I.2. After a beneficiary joins an HMO, it is hypothesized that the beneficiary's cost is likely to increase relative to his or her FFS costs in the year prior to enrolling. Such cost increases seem likely for two reasons. First, beneficiaries may postpone discretionary care in the months prior to joining an HMO so that they can take advantage of HMOs' typically lower copayments. Second, beneficiaries may be more likely to join HMOs during a spell of unusually good health.
This expectation that costs increase is known as "regression toward the mean" (RTM). To the extent that RTM occurs, unadjusted prior-use costs of joiners understate the initial average health care costs of new HMO enrollees, as well as the costs of all HMO enrollees. HCFA's method for determining HMO capitation rates implicitly assumes that RTM is full (100 percent) and immediate. That is, HCFA assumes that, upon enrolling in an HMO, joiners' costs immediately increase to equal the average cost of FFS beneficiaries. Although it is reasonable to expect some RTM, no evidence supports a 100-percent effect that occurs so soon after enrollment. We estimated the degree of RTM likely to occur and used this estimate to adjust joiners' prior-use costs so they more accurately represented all enrollees' costs. We derived our estimate of the regression effect, which we term the "regression-toward-the-mean adjustment factor" (RTMF), from actual FFS cost data for beneficiaries whose cost and demographic characteristics resembled those of joiners and from the actual distribution of enrollees' HMO tenure. Our analysis of 1995 data suggested that the RTMF was about half of the maximum potential effect—50 percent, as opposed to the 100-percent RTMF that HCFA's methodology implicitly assumes. (For further discussion of the RTMF, see app. II.) Because new HMO enrollees, by definition, do not die during the period just prior to their enrollment, prior-use cost data understate the costs of HMO enrollees who die during the year. The costs associated with the final months of life—"death-related costs"—are typically substantial. Consequently, we accounted for them to avoid underestimating SACHMO. We assumed that the costs of an HMO enrollee who died equal the costs of an FFS beneficiary who died. To find the average cost estimate for the deceased, we divided the calendar year total costs of all FFS beneficiaries deceased in 1991 in each county by the number of months those beneficiaries were alive during the year. Our adjustment was equivalent to imposing a 100-percent RTM effect on the costs of HMO enrollees who died during the base year. Because favorable selection can result in HMOs' having lower mortality rates than FFS, we imputed death costs only for HMO enrollees who died during the year. This approach accounted for excess payments to HMOs in counties where mortality rates were lower in HMOs than in FFS. After estimating the average expected costs of serving all of a county's beneficiaries in FFS (SACALL), we could estimate the excess capitation payments that resulted from HCFA's method of calculating SAC and the county rate. The formula for determining capitation rates can be expressed as the following:
capitation rate = 0.95 × SACALL × risk factor
However, HCFA estimates average costs using only beneficiaries actually in FFS, so that HCFA's formula is actually this:
capitation rate = 0.95 × SACFFS × risk factor
Consequently, the excess capitation rate can be estimated by the following:
excess capitation rate = 0.95 × (SACFFS - SACALL) × risk factor (equation 5)
The risk factor term is specific to individual beneficiaries. On the basis of their demographic characteristics, it can take on values greater or less than 1.0. The total of county rate excess payments for a given county is obtained by summing the individual level excess payment amounts, expressed by equation 5. We applied this methodology to California's 58 counties to estimate county-rate excess payments for 1995, 1996, and 1997. Our estimates are presented in appendix III.
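A minimal Python sketch of how a county's excess could be tallied from these formulas follows; the SAC figures and risk factors are illustrative, and the sketch is not the estimation program used for this report.

```python
# Sketch of the county-rate excess payment calculation implied by the formulas
# above: each enrollee's excess is 0.95 x (SACFFS - SACALL) x risk factor, and
# a county's total is the sum over its HMO enrollees. Values are illustrative.
def county_rate_excess(sac_ffs, sac_all, enrollee_risk_factors):
    return sum(0.95 * (sac_ffs - sac_all) * rf for rf in enrollee_risk_factors)

# Example: SACFFS overstates SACALL by $5 a month, and the county has three
# HMO enrollees with demographic risk factors of 0.85, 1.00, and 1.30.
print(f"${county_rate_excess(500.0, 495.0, [0.85, 1.00, 1.30]):.2f} per month")
```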
This section describes the steps we followed to estimate aggregate excess payments to HMOs, that is, total excess payments caused by the full effect of favorable selection on the rate-setting formula. Our method compares what Medicare paid for risk contract HMO enrollees to what Medicare would have paid for the same enrollees had they not joined HMOs. Although this method establishes a benchmark for excess payments against which HMO payment reforms can be measured, we do not suggest that HCFA use the methodology described below to adjust capitation rates because it was not designed or tested as a rate-setting methodology. We estimated the average cost of HMO enrollees (ACHMO) using the same prior-use approach described above. After our adjustments for RTM and death-related costs were applied, ACHMO was representative of the costs of a group of HMO enrollees with the demographic characteristics of new HMO enrollees (joiners). We used HCFA's method to calculate a county average capitation rate. Because ACHMO reflected the demographic characteristics of only joiners, we calculated the average capitation rate for the joiner population (CAP_RATEJAVG) so that it, too, reflected the demographic characteristics of only joiners. Specifically, we adjusted the 1995 county rate up or down according to the average risk factor of that county's joiners. We calculated the percent aggregate excess payment (PAEP) to risk contract HMOs in each county using the following formula:
PAEP = (CAP_RATEJAVG - ACHMO) / CAP_RATEJAVG
CAP_RATEJAVG and ACHMO reflect the demographic characteristics only of joiners, but the cost characteristics of all HMO enrollees. Because these terms affect both the numerator and denominator, PAEP is demographically neutral—that is, demographic characteristics are canceled out in the expression. To find aggregate excess payments that corresponded to actual HMO enrollees, we multiplied PAEP by total payments to risk HMOs by county. We applied this methodology to estimate aggregate excess payments to HMOs in California's 58 counties in 1995. (See app. III.) As explained in appendix I, establishing the Medicare capitation rate for HMOs on the basis of the cost of serving beneficiaries hinges on estimating the expected FFS costs of HMO enrollees (SACHMO). In turn, adequately estimating SACHMO requires adjusting HMO enrollees' observed prior-use costs for the increases expected to occur after they enroll. This increase has been labeled regression toward the mean because enrollees' average health costs, which are relatively low before joining the HMO, begin to rise over time and approach ("regress" toward) the average cost of similar beneficiaries who remain in FFS. This appendix describes our methodology to account for the RTM effect, including the high health care costs typically incurred during the last months of life. Although we drew on previous studies, available data required that we develop a new method of adjusting prior-use estimates of enrollees' costs for RTM. HCFA implicitly assumes that HMO enrollees' costs fully regress (increase) to the mean of FFS immediately upon enrollment. Studies have generally found that, after a beneficiary enrolls in an HMO, his or her service use and costs rise. Nonetheless, HCFA's assumption that RTM is full and immediate receives no empirical support in the literature. For example, Beebe found significant increases in the first year after enrollment and moderate increases thereafter.
After 3 years, estimated costs of HMO enrollees were 94 percent of those of comparable FFS beneficiaries; by year 6, enrollees' estimated costs had risen modestly to 96 percent of FFS beneficiaries' costs. A more recent study by Hill and others found that RTM closed half the gap in costs between HMO joiners and FFS beneficiaries. We allowed our estimate of RTMF to differ between groups of beneficiaries, depending on whether they survived or died during the 4-year period that we analyzed. The association between mortality and average costs is well documented by previous studies. For example, Lubitz and others found that people in their last 12 months of life have costs that are significantly higher than those of other Medicare beneficiaries and account for a disproportionate share (about 28 percent) of health care expenditures. Similarly, average costs during the final 2 and 3 years of life, while not as large, are also considerably higher than the average for all beneficiaries. This pattern is illustrated in figure II.1. The relationship between the degree of RTM experienced by HMO enrollees and their proximity to death has not been addressed by previous studies. Nonetheless, it is possible that enrollees surviving different lengths of time after joining an HMO would experience different degrees of RTM. For example, it is plausible that HMO enrollees in their last year of life might experience complete RTM, while those many years from death might experience little. In our analysis, we allowed for the possibility that the appropriate RTM adjustment for a group of beneficiaries may depend on their proximity to death. Table II.1 presents the definitions of the beneficiary categories and the percentage of HMO enrollees (for California in sample year 1992) in each category. To estimate RTMF for enrollees who survive for 4 or more years (category I enrollees), we developed an approach that generally follows Beebe's 1988 methodology. That is, we used 4 years of longitudinal data on a sample of the FFS Medicare population to track the cost experience over time of two proxy cohorts—one representing HMO joiners and one representing FFS beneficiaries. Our method involved four steps.
1. We randomly drew two samples—one reflecting the distribution of age, sex, and costs of new HMO enrollees (joiners) and the second reflecting the distribution of age, sex, and costs of beneficiaries who remained in FFS.
2. We then computed, for each of 4 years, the ratio of the average annual cost of the proxy HMO joiners to the cost of the proxy FFS beneficiaries.
3. Next, we used these cost ratios to estimate how rapidly and fully the costs of HMO joiners converged toward those of FFS beneficiaries.
4. Finally, we combined the cost ratios with data on HMO enrollees' tenure within each county to produce a county-specific RTMF.
We assembled a longitudinal data set that contained the claims for approximately 1.4 million California beneficiaries who were continuously enrolled in FFS Medicare between 1991 and 1994. Only beneficiaries who were eligible for part A and part B and who remained in the FFS sector for the entire 4-year period were included. People under age 65 who were eligible for Medicare because of a disability and people with end-stage renal disease were excluded. We constructed two proxy cohorts, one with the same demographic mix and 1991 service cost distribution as the Medicare HMO joiners, and the other with the demographics and cost distribution of continuing FFS beneficiaries.
To do this, we divided the FFS data set into 10 age and sex subgroups and further divided each subgroup into 25 smaller strata according to the cost of services they received in 1991. We then selected two stratified random samples—one for each proxy cohort—from each demographic subgroup. We limited each sample to 20 percent of the size of its corresponding demographic subgroup within the FFS data set. The sample sizes within each cost stratum were determined by the actual cost distribution of HMO joiners and continuing FFS beneficiaries. Table II.2 lists the cost strata for one demographic subgroup: females aged 65 to 69. Columns 2 and 3 show the percent distribution of the actual FFS and joiner populations across 25 cost categories. For example, among females aged 65 to 69, 19.2 percent of the FFS population and 39.9 percent of the joiner population had no Medicare charges in 1991.
Table II.2: 1991 Distribution Across Cost Categories of HMO Joiners and FFS Beneficiaries, 65- to 69-Year-Old Females
Because of insufficient representation in the population, beneficiaries with costs in the first year of $100,000 or more were excluded from the analysis. Within each demographic group, we calculated the ratio of the proxy HMO joiner cost average to the proxy FFS cost average for each of 4 years (1991 through 1994). The results are presented in figure II.2, which shows that the pattern of changes in the cost ratios over time displays a high degree of consistency across demographic groups. The weighted average (across demographic groups) of these cost ratios is shown in table II.3, which reports the ratios by tenure in the HMO (in years): the year prior to enrollment (1991), year 1 (1992), year 2 (1993), and year 3 (1994). These ratios show how rapidly and fully the costs of the overall proxy HMO joiner cohort are likely to converge toward the costs of the proxy cohort in FFS. These cost ratios show that HMO enrollee costs (represented by proxy HMO joiners' costs) are about two-thirds of comparable FFS beneficiary costs in the year before enrollment, suggesting significant favorable selection. However, once beneficiaries enroll, their costs are expected to increase significantly relative to FFS costs in the first year; the proxy HMO cohorts' costs rose from 64 percent to 85 percent of FFS cost. In the second year of HMO enrollment, enrollees' relative costs are expected to rise moderately, and they did—from 85 percent to 88 percent. In the third year, enrollees' relative costs are expected to show a further, slight increase. By the end of the third year, enrollees' expected costs—as represented by their proxy cohort's costs—had regressed about 71 percent; the difference between enrollees' costs and those of FFS beneficiaries had declined from 36 percent to 10 percent. The slight increases in the proxy enrollees' costs (relative to the FFS beneficiaries' costs) after the first year suggest that complete regression either will not occur or will take many years. We used the information on the joiners' estimated cost increases over time (presented in table II.3) to construct an RTMF for each county. Table II.4 illustrates the calculations for a hypothetical county (based on California data). First, we used our estimates to calculate the increase in expected FFS costs of people who had been enrolled in an HMO for 1, 2, or 3 or more years—relative to their prior-use costs. (See table II.4, row 1.)
Computing a weighted average of these increases—where the weights reflect the tenure distribution of HMO enrollees in a given county—yielded a county’s RTMF. (A tenure distribution representative of all California counties is presented in table II.4, row 2.) The RTMF of 1.40 combines information about how quickly and fully RTM occurs (row 1) with these data on the tenure of HMO enrollees. In table II.4, the benchmark cost proportion is the cost ratio for each year divided by the cost ratio for the year prior to enrollment, and the tenure distribution is the proportion of HMO enrollees for the county (from actual enrollment data).

We could not estimate an RTMF for category II enrollees with the method that we used for category I enrollees. That method requires constructing proxy cohorts of HMO joiners and FFS beneficiaries, but the number of category II enrollees—those who survive between 1 year and 4 years after enrollment—was insufficient to do so. We chose to assume full RTM for the year a joiner died and to apply our estimate of RTMF for category I enrollees to category II enrollees prior to the year they died. Research indicates that individuals’ costs tend to rise most sharply in the months before death, so we assumed the costs of category II enrollees in their year of death regressed fully to the mean of FFS beneficiaries’ costs. With respect to the year or years before this last year of life, when individuals’ costs generally rise less sharply, we applied the category I RTMF estimate to category II enrollees, which represented a significant increase in prior-use costs. If these assumptions over- or underestimate the RTMF for category II enrollees, the effect on the estimate of the county adjusted average per capita cost (AAPCC) rate will be quite small, given the limited number of category II enrollees.

The average costs of HMO joiners in the year of their death (in this case 1991) cannot be estimated. After all, joiners must live beyond the prior-use year (1991) to become HMO enrollees. This means that we lacked data to estimate the extent to which category III enrollees’ average costs (in the year of their death) might remain below the costs of comparable FFS beneficiaries. Consequently, to account for enrollees’ death-related costs that prior-use estimates cannot capture, we assigned to HMO enrollees who died in 1992 the costs of FFS beneficiaries with comparable demographic characteristics who died in 1991. Similarly, we used the costs of FFS beneficiaries who died in the prior-use year to approximate the costs of FFS beneficiaries who died in the sample year (1992). By setting the death-related costs of HMO enrollees equal to those of FFS beneficiaries, we assumed that, among category III enrollees, RTM in costs was complete.

Although our method for estimating excess payments to HMOs assumed that no difference existed in death-related costs between HMO and FFS enrollees, it did not assume that the respective death rates were equal. As table II.5 shows, the death rates (per 100) of beneficiaries enrolled in HMOs are significantly lower than those of beneficiaries in FFS. This finding is consistent over time and across demographic groups. The lower death rates among HMO enrollees are a measure of favorable selection. Consequently, these lower death rates are partly responsible for the findings of excess payments to HMOs reported in appendix III.
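To make the county RTMF construction in tables II.3 and II.4 concrete, the following sketch (in Python) applies the steps described above. It is illustrative only: the cost ratios approximate those discussed earlier, the tenure shares are hypothetical, and the variable names are ours rather than HCFA's.

```python
# Illustrative sketch of the county RTMF calculation described in tables II.3 and II.4.
# All numbers are hypothetical; they only approximate the pattern discussed in the text.

# Step 2 output: ratio of proxy HMO joiner costs to proxy FFS costs, by tenure year.
cost_ratio = {
    "prior_year": 0.64,  # year before enrollment (1991)
    "year_1": 0.85,      # first year of enrollment (1992)
    "year_2": 0.88,      # second year (1993)
    "year_3": 0.90,      # third year (1994)
}

# Row 1 of table II.4: benchmark cost proportions, each year's cost ratio divided
# by the ratio for the year prior to enrollment.
benchmark = {year: ratio / cost_ratio["prior_year"]
             for year, ratio in cost_ratio.items() if year != "prior_year"}

# Row 2 of table II.4: tenure distribution of a county's HMO enrollees
# (illustrative shares; actual shares come from county enrollment data).
tenure_share = {"year_1": 0.25, "year_2": 0.25, "year_3": 0.50}

# County RTMF: tenure-weighted average of the benchmark cost proportions.
rtmf = sum(tenure_share[y] * benchmark[y] for y in tenure_share)

# Applying the factor: a category I enrollee's expected FFS cost is the
# prior-use cost scaled up by the county RTMF. Category III enrollees are
# instead assigned the mean cost of comparable FFS decedents (complete RTM).
prior_use_cost = 3_000.00          # hypothetical prior-use cost for one enrollee
adjusted_cost = prior_use_cost * rtmf

print(f"County RTMF: {rtmf:.2f}")                       # roughly 1.4 with these inputs
print(f"Adjusted expected FFS cost: ${adjusted_cost:,.2f}")
```

With these illustrative inputs the factor comes out near the 1.40 shown in table II.4; an actual county's factor depends on that county's enrollee tenure distribution.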
Table II.5: Death Rates, per 100, of Aged Medicare Beneficiaries by Demographic Group and Year, 1992-94. (To control for differences in the demographic composition of the FFS and HMO populations, population group means are weighted by the proportion of the FFS population in each demographic group.)

We summarize below the source of empirical evidence we used to estimate the RTM experience for each category of enrollee, and how this evidence was used to arrive at a corresponding RTM adjustment factor.

Category I enrollees. We used FFS data on cohorts of beneficiaries whose costs and demographic characteristics were comparable with those of HMO enrollees to simulate their RTM experience. On the basis of this simulation, we estimated an RTMF (a numerical factor) to adjust the average cost of category I enrollees upward.

Category II enrollees. Because of insufficient sample sizes within the cost strata, we could not conduct a simulation of proxy HMO enrollees’ costs to estimate an RTMF. However, research indicates that individuals’ costs tend to rise most sharply in the months before death. Consequently, we assumed these enrollees’ costs in the year of death regressed fully to the mean of FFS beneficiaries’ costs. With respect to the year or years before the last year of life (when costs generally rise less sharply), we applied the category I RTMF estimate to category II enrollees.

Category III enrollees. We could not conduct a category I-type simulation, and prior-use data provided only limited insight on the RTM experience for these enrollees. Consequently, we assumed that the costs of category III enrollees displayed complete RTM, that is, that their costs in the sample year were no different on average than costs for comparable FFS beneficiaries.

By making these RTM-related adjustments to our prior-use-based estimates of HMO enrollees’ costs, we significantly lowered our estimates of HMO excess payments from what they would have been otherwise. Appendix III presents estimates of excess payments affected by the RTM adjustments described above.

This appendix discusses our estimates of the amount of excess payments Medicare has made to California HMOs that participate in its risk contract program, in order to indicate the size and significance of this problem in Medicare’s method of setting capitated rates. The appendix details the savings that could be realized by adopting our method to improve the county rate. These savings are implied by our estimates of county-rate excess payments for the years 1995, 1996, and 1997. The appendix also addresses aggregate excess payments to Medicare HMOs—the sum of county-rate and risk-adjuster-related excess payments—for 1995. To reduce the computational burden, we limited our efforts to the 58 counties of California. Because risk contract program enrollees are concentrated in relatively few states, demonstrating the magnitude of excess payments did not require us to produce estimates for every county nationwide. We selected the counties of California because (1) about 36 percent of all risk contract enrollees reside there, (2) rates of beneficiary enrollment in risk HMOs vary substantially across the 58 counties, and (3) in recent years, California has experienced rapid growth in HMO enrollment. Although our estimates pertain to a large portion of the risk contract program, we cannot project our estimates nationwide or to other states with demographically similar counties.
We constructed all our estimates from individual-level claims data, using data from two HCFA sources: (1) the Enrollment Database File (EDB) and (2) the HCFA claim files, which contain Medicare claims submitted by FFS providers. We combined individual expenditure information with EDB data to produce a single enrollment/expenditure file containing information on approximately 4.3 million California residents.

Table III.1 presents estimates of county-rate excess payments in dollar amounts and as a percentage of risk contract program expenditures for each county. (The estimates are weighted averages of the excess payments in the rates for aged (parts A and B) and disabled (parts A and B).) The counties are ranked by excess payment amounts for 1997. We have included in table III.1 only those counties for which the number of new risk HMO enrollees exceeded 500 in the base year. With respect to the excluded counties, the county-rate excess payments (in each year) total less than 3 percent of total county-rate excess payments in the state.

Table III.1 notes: Bullets indicate that the estimate was not sufficiently precise to be reported, because the county had fewer than 500 joiners during the base year. The weighted average percentages are the ratios of total excess payments to risk contract program expenditures. Each weighted average pertains only to the counties listed. The weighted averages are not comparable across years because the number of counties differs from year to year. However, the percentages for a given county can be compared across years.

Table III.1 shows that, for California in 1996, the estimated excess payments solely attributable to the county rate are substantial. Consequently, elimination of this component of excess payments—in one state—would save Medicare several hundred million dollars annually. This potential saving equals about 5 percent of risk contract program expenditures in California.

As rates of risk HMO enrollment increase in future years, county-rate excess payments may increase as well. (As a result, the longer-term savings from eliminating county-rate excess payment could well exceed the immediate savings.) This conclusion follows from three premises:

1. Across counties in each year, the higher the HMO enrollment rate, the higher the county-rate excess payment as a share of risk contract outlays. (More technically, the relationship between the county-rate excess payment—as a share of risk contract outlays—and the share of Medicare beneficiaries in the county enrolled in a risk HMO is positive and statistically significant.) This premise implies that the degree of favorable selection in a county does not decline as enrollment rates rise—at least over their observed range of variation.

2. The enrollment rate for risk HMOs will increase nationwide and in California.

3. As the national and state enrollment rates increase, the number of counties with substantial risk HMO enrollment will increase.

In sum, in California, growing enrollment is likely to have two effects on excess payments. The more straightforward effect will be to raise excess payments because a given excess payment per enrollee will be multiplied by a larger number of enrollees. Less obvious, however, will be higher enrollment’s tendency to raise the excess payment per enrollee.
That is, if favorable selection continues to occur while HMO enrollment increases, the average cost of beneficiaries remaining in FFS can also increase, leading to higher excess payments per HMO enrollee. As a result of these two effects, the statewide total estimate of county-rate excess payments will increase with HMO enrollment, between 1995 and 1997, from about $276 million to about $413 million.

Table III.2 presents our estimates of aggregate excess payment by county. Only those counties for which the number of new HMO enrollees (joiners) exceeded 500 in 1995 are presented in the table. The counties are ranked by excess payment amounts. We estimated that aggregate excess payments totaled about $1 billion in 1995. This amount represents about 16 percent of Medicare’s payments to California HMOs under the risk contract program in 1995. Like county-rate excess payments, aggregate excess payments are concentrated in the five counties ranking highest in risk contract program enrollment. Together, these counties account for more than 75 percent of our estimate of statewide aggregate excess payments.

A comparison of the percentages shown in tables III.1 and III.2 indicates that county-rate excess payments account for roughly one-quarter of aggregate excess payments. This result suggests that, even if the imprecision in the estimates of excess payment due to the county rate were substantial, correction of the county rate on the basis of those estimates would not lead Medicare to underpay HMOs as a group. In effect, the component of aggregate excess payment due to inadequate risk adjustment acts as a cushion for the county-rate correction.

The following is GAO’s comment on the Department of Health and Human Services’ letter dated March 26, 1997. In commenting on a draft of this report, HHS agreed that, because of favorable selection, the current payment method results in substantial overpayments to Medicare managed care plans. Moreover, HHS did not dispute that our recommended rate-setting revision would save money. However, HHS cited our proposed revision as potentially “inequitable,” possibly burdensome to implement, and “only an interim measure” until HCFA develops better health status adjusters. As discussed below, we believe that certain features make our recommended revision evenhanded, easy to implement, and important to adopt, regardless of the likely improvements to risk adjustment now under consideration. The details of our reasoning follow.

HHS stated that our proposed revision is not equitable because it would differentially affect HMO payments based on the managed care penetration rate within each county. This is not accurate. Nothing in our proposed refinement to the Medicare payment method would tie HMO payments to HMO penetration rates. Our recommendation is to include an estimated FFS cost for HMO enrollees in the formula used to calculate the county rate. By making the estimate of a county’s average Medicare costs more accurate, this revision would reduce payments most in counties where cost disparities between the FFS and HMO beneficiaries are greatest. Our recommended approach would leave the county payment rate unchanged despite high managed care enrollment—if HMO and FFS beneficiaries in a county have the same average cost.
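To see how the recommended refinement operates, the following sketch is a simplified, hypothetical illustration; the quantities and the function are ours and are not drawn from the report's data files. The county rate is recalculated as the average cost of all beneficiaries, with each HMO enrollee represented by an estimated FFS cost equal to the prior-use cost scaled by the RTM factor.

```python
# Hypothetical sketch of the recommended county-rate revision: include HMO
# enrollees' estimated FFS costs in the county average instead of excluding them.
# All inputs are illustrative.

def revised_county_rate(ffs_total_cost, ffs_count, hmo_prior_use_costs, rtmf):
    """Average cost across all beneficiaries, with HMO enrollees represented
    by their estimated FFS-equivalent costs (prior-use cost times RTM factor)."""
    hmo_estimated = [cost * rtmf for cost in hmo_prior_use_costs]
    total_cost = ffs_total_cost + sum(hmo_estimated)
    return total_cost / (ffs_count + len(hmo_estimated))

# Current method: county rate based on FFS beneficiaries only.
ffs_total_cost = 5_000 * 4_800.0      # 5,000 FFS beneficiaries averaging $4,800
hmo_prior_use = [3_000.0] * 1_000     # 1,000 HMO enrollees with $3,000 prior-use costs

current_rate = ffs_total_cost / 5_000
revised_rate = revised_county_rate(ffs_total_cost, 5_000, hmo_prior_use, rtmf=1.40)

print(f"Current (FFS-only) county rate:    ${current_rate:,.2f}")   # $4,800.00
print(f"Revised all-beneficiary county rate: ${revised_rate:,.2f}")  # lower, reflecting favorable selection
```

If the enrollees' estimated FFS costs equaled the FFS average, the revised rate would match the current rate, which is the point made above about counties with no cost disparities.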
HHS also expressed concern that, with the adoption of our revision, counties with relatively low AAPCC rates but high Medicare managed care penetration rates could be “very adversely affected.” Our approach is targeted and would not reduce Medicare rates in counties with no cost disparities between the FFS and HMO beneficiaries. Under our approach, a county with a low AAPCC rate but no cost disparities would see no change in its county payment rate—even if the HMO penetration rate in that county was high. In contrast, an across-the-board payment rate cut—which, as HHS notes, is part of the administration’s fiscal year 1998 budget proposal—would affect high AAPCC and low AAPCC counties equally, regardless of how costly a county’s beneficiaries might be. Our proposed revision would reduce but not eliminate excess HMO payments. Consequently, substantial excess payments would probably remain to cushion HMOs from any resulting reduction in the county rate. (See p. 49.) To illustrate what HHS believes is the potential for our modified payment method to produce inequitable results, HHS constructed an example involving two hypothetical counties. HHS contends that the example shows a paradoxical result: under our modified method, HHS asserts, HMOs in county A would receive higher capitation payments than HMOs in county B even though HMO enrollees in county A are healthier than those in county B. As explained below, this conclusion is incorrect. Our recommendation would yield HMO payment rates in line with Medicare law, because they would be set on the basis of the estimated average FFS cost of all beneficiaries in a county. HHS did not acknowledge that under the current method both counties’ HMOs receive the same rate even though county A HMOs serve healthier beneficiaries than county B HMOs. Our method would reduce excess payments to HMOs in both counties, although HMOs would still receive payments exceeding their enrollees’ expected per capita costs. Moreover, our method would increase payments to HMOs in counties experiencing adverse selection—that is, in instances where a county’s HMOs have enrollees whose expected costs exceed those of FFS users. HHS’ example also runs counter to the experience of the counties we examined. Our data show that counties with low HMO penetration rates tend to have low excess payments relative to counties with high penetration rates. For example, excess HMO payments are lower in Sacramento, which had 5.6 percent of its Medicare beneficiaries enrolled in HMOs, than in Los Angeles, which had 25.5 percent enrolled in HMOs. Nonetheless, HHS’ example assumes excess payments and HMO penetration are inversely related (higher penetration rate, lower excess payments). Though some counties may display this pattern, the counties we examined do not. In discussing its example, HHS seemingly endorses the current method of paying Medicare HMOs as an interim strategy and, consequently, considers it appropriate to ignore the problem of large excess payments in counties like A, at least for several years. In contrast, our recommended modification of the current method would reduce excess payments significantly and promptly. While it is true that HMOs in B would be paid less than in A, correcting such discrepancies is the role of improved health status adjusters. HHS commented that our modification to the current payment method may be difficult to implement, citing both conceptual issues and resource requirements. 
For example, HHS suggested that “the issue of when to begin counting for the regression (toward the mean) effect is problematic” because many beneficiaries switch plans or switch between managed care and FFS. To overcome this potential difficulty, HCFA could consider time spent in various HMOs with brief spells in FFS as continuous enrollment in managed care. If the beneficiary spent a significant length of time in FFS, HCFA could reset the regression effect for that beneficiary to zero. This approach would be conservative in that it would tend to increase the estimated FFS costs of HMO enrollees and thus yield rates favorable to HMOs.

In addition, HHS expressed concern that “if separate [RTM factor] estimates are required for each county the burden could be very great.” Separate estimates of RTM factors for each county are not needed. We estimated the RTM factor using statewide data, although we used HMO tenure levels at the county level in conjunction with the RTM factor to adjust county costs.

HHS believes that implementing our refinement to the current method would require a significant amount of resources. Given the modest resources (two analysts) that we used in conducting our analysis, and that our proposed change would not entail collecting new data, we believe that the additional resources needed to implement our refinement would be small. Moreover, the likely benefits greatly outweigh such costs. As our report indicates, the payoff from this effort would probably be hundreds of millions of dollars in Medicare savings each year.

HHS states that our payment method revision is an interim solution to the HMO overpayment problem. HHS also notes that HCFA is working to develop a new payment methodology incorporating health status adjusters that might be phased in starting in calendar year 2001. Together, these assertions could imply that our approach is unnecessary. Our revision, however, is not an interim solution. It is an important first step toward—and most likely will be a component of—a comprehensive solution. By addressing the effect of favorable selection in the county rate, our revision makes an essential adjustment to the rate on which the rest of an HMO’s capitation payment is based. The revision could be implemented as early as calendar year 1998. This would allow the government, at the very least, 3 years to make partial reductions in excess HMO payments—amounting to saving hundreds of millions of taxpayer dollars in each of those years. Moreover, our recommended correction of the county rate would complement improved health status adjusters to provide the foundation for a more efficient, accurate, and equitable redesign of Medicare’s method of HMO payment.

The following team members also made important contributions to this report: James Cosgrove, Assistant Director; Thomas Dowdal, Assistant Director; Craig Winslow, Senior Attorney; George M. Duncan, Senior Evaluator; and Hannah F. Fein, Senior Evaluator.
Pursuant to a congressional request, GAO provided information on Medicare's rate-setting method for paying risk contract health maintenance organizations (HMO), focusing on: (1) the conditions under which Medicare's method can yield payment rates that are too high; and (2) a practical improvement to Medicare's method directed at the problems fostering excess payments. GAO noted that: (1) contrary to the expectations built into Medicare law for paying risk contract HMOs, these HMOs have not produced savings for Medicare; (2) however, Medicare-sponsored research and other studies have found that the program has actually spent more for HMO enrollees than their costs would have been under fee-for-service (FFS); (3) researchers attribute this outcome to favorable selection, or the tendency for healthier-than-average individuals to be enrolled in HMOs; (4) GAO has identified a modification to Medicare's current HMO rate-setting method that could help reduce excess HMO payments; (5) central to the current method is an estimate of the average cost, county by county, of serving Medicare beneficiaries in the FFS sector; (6) the actual rates are set by adjusting the county averages up or down on the basis of each enrollee's likelihood of incurring higher or lower costs, a process known as risk adjustment; (7) although considerable attention has focused on problems with this process, GAO's work centers on a largely overlooked problem regarding the estimates of average county costs, that is, the county rate, commonly known as the AAPCC (adjusted average per capita cost); (8) HCFA's method of determining the county rate excludes HMO enrollees' costs in estimating per-beneficiary average cost; (9) the result is that in counties experiencing favorable selection, HCFA's method overstates the average costs of all Medicare beneficiaries and leads to overpayments; (10) GAO's proposed modification estimates HMO enrollees' expected FFS costs using information available to HCFA; (11) GAO's approach produces a county rate that more accurately represents the costs of all Medicare beneficiaries; (12) in examining the rates HCFA determined for California's 58 counties in 1995, GAO found that applying its approach would have reduced excess payments by about 25 percent, or $276 million; (13) substantially better risk adjustment, which appears to be years away from implementation, would have targeted the remaining 75 percent; (14) GAO also found that Medicare's current method produced a greater overstatement of county average costs in counties with higher Medicare HMO penetration, up to 39 percent; and (15) this finding calls into question the hypothesis put forth by HMO industry advocates and others that the excess payment problem will be mitigated as more beneficiaries enroll in Medicare managed care and HMOs contain a more expensive mix of beneficiaries.
CMS administers 1-800-MEDICARE through a single contractor that operates the telephone help line 24 hours a day, 7 days a week. CMS initially designed 1-800-MEDICARE to assist beneficiaries in obtaining information about Medicare programs, including Medicare’s managed care program, and has since expanded the telephone line to handle increased call volume and additional inquiries, including inquiries about Medicare’s prescription drug benefit. Calls are answered by an automated system and, if requested, callers may be routed to a CSR for additional assistance. The contractor trains CSRs to respond to two types of inquiries—(1) general inquiries about the Medicare program (such as general inquiries about prescription drug coverage or beneficiary address changes) and (2) specific inquiries about some Medicare Part A and Part B claims—for both English- and Spanish-speaking callers. CMS uses performance metrics and indicators to oversee and measure the 1-800-MEDICARE contractor’s performance. To meet the needs of people with LEP, HHS developed an LEP Plan that identifies steps for its agencies, including CMS, to take to improve access for people with LEP to agency programs and activities, including 1-800-MEDICARE. CMS administers 1-800-MEDICARE to answer callers’ inquiries 24 hours a day, 7 days a week about Medicare eligibility, enrollment, and benefits. CMS initially developed 1-800-MEDICARE to assist beneficiaries and other members of the public in obtaining information about Medicare programs, including Medicare’s managed care program, as required by the Balanced Budget Act of 1997 (BBA). In 2003, the MMA required CMS to make information about the Medicare prescription drug benefit available through 1-800-MEDICARE and gave CMS increased flexibility in the administration of Medicare. CMS officials reported that they used this flexibility to transfer responsibility for information about Medicare-related Part A and Part B (including DME) claims calls, which previously had been handled by separate Part A and Part B contractors with their own help line numbers, to 1-800-MEDICARE. CMS completed the transition of Part A and Part B claims calls to 1-800-MEDICARE in September 2007. Each of these initiatives contributed to an increase in the overall volume of calls received by 1-800-MEDICARE. During the period encompassed in our review, overall call volume for 1-800-MEDICARE increased approximately 22 percent, from about 1.6 million calls in July 2005 to more than 2.0 million calls in July 2008. (See fig. 1.) A large increase in volume occurred close to the beginning of the Medicare prescription drug benefit in late 2005 and early 2006. Since that time, monthly call volume has ranged from about 2 million inquiries to slightly more than 3 million. Callers to 1-800-MEDICARE receive information from an automated interactive voice response (IVR) system, a CSR, or a combination of both. Calls to the line are initially answered by the IVR, which responds to voice and electronic prompts by the caller. Callers also may use the IVR to obtain assistance in either English or Spanish. Those who do not indicate a language preference in the IVR and wish to speak to a CSR are automatically routed to a CSR for assistance in English. If the IVR cannot address the needs of the caller or if the caller requests to speak to a person, the call is routed to a CSR for assistance. (See fig. 2.) 
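As a rough illustration of the routing flow just described (and shown in fig. 2), the sketch below encodes the decision rules in Python; the function name, queue labels, and inputs are hypothetical and are not part of CMS's or the contractor's systems.

```python
# Hypothetical sketch of the 1-800-MEDICARE call-routing flow described above.
# Names and labels are illustrative only.

def route_call(language_preference, ivr_resolved, wants_csr):
    """Return where an incoming call would be directed."""
    if ivr_resolved and not wants_csr:
        return "handled by IVR"        # inquiry answered by the automated system
    if language_preference == "spanish":
        return "Spanish CSR queue"     # routed to a bilingual CSR
    # Callers who state no preference and ask for a person default to English.
    return "English CSR queue"

print(route_call("spanish", ivr_resolved=False, wants_csr=True))   # Spanish CSR queue
print(route_call(None, ivr_resolved=True, wants_csr=False))        # handled by IVR
print(route_call(None, ivr_resolved=False, wants_csr=True))        # English CSR queue
```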
1-800-MEDICARE classifies incoming calls to CSRs into two types of requests: Inquiries about general Medicare issues, such as general inquiries about prescription drug coverage or beneficiary address changes. Specific inquiries about claims for Medicare Parts A and B. To provide assistance with these claims inquiries, 1-800-MEDICARE CSRs are able to access certain Part A and Part B claims data. Newly hired CSRs initially receive training to provide services related to general Medicare inquiries and, as they become more proficient, receive additional training to respond to claims inquiries. The 1-800-MEDICARE contractor hires additional, temporary CSRs to handle anticipated increases during 1-800-MEDICARE’s annual peak period from October through mid-January; these CSRs are trained to handle general Medicare inquiries. The IVR routes callers with general Medicare or specific claims inquiries to CSRs according to complexity. The approximately 2,600 CSRs responding to these calls are grouped into two skill levels, or “tiers,” based on their training and experience. (See fig. 3.) Tier 1 CSRs, who made up the majority of CSRs as of January 1, 2008, receive training to respond to simple inquiries about Medicare and claims. Tier 1 CSRs handle the majority of 1-800-MEDICARE inquiries. Between September 2007 and July 2008, Tier 1 CSRs handled over 15.1 million calls—95 percent of the approximately 15.9 million calls handled by CSRs. Approximately 11.8 million of these Tier 1-handled calls were general Medicare inquiries while approximately 3.3 million were claims inquiries. Tier 2 CSRs receive training to assist callers with more complex general Medicare and claims inquiries. For example, callers wishing to enroll in a Medicare Advantage plan would speak to a Tier 2 CSR. These CSRs also can respond to calls at the Tier 1 level if needed. Between September 2007 and July 2008, Tier 2 CSRs handled about 800,000 calls—5 percent of all calls handled by CSRs. Approximately 523,000 of these calls were general Medicare inquiries, while approximately 273,000 were claims inquiries. Particularly complex inquiries, about 1 percent of all calls received, may be referred by Tier 1 or Tier 2 CSRs to a reference center staffed by CSRs with additional training above the level of the Tier 2 CSRs. To provide responses to callers, CSRs in Tier 1 and Tier 2 use defined “scripts”—standard language explaining elements of the Medicare program—that they access using desktop computer software. CSRs listen to a caller’s inquiry and then enter related keywords into this system to generate a list of suggested scripts that could be used to answer the inquiry. The CSRs then select the script they consider best suited to answer the inquiry and read either excerpts or the entire script to the caller. CSRs may also consult other information sources if appropriate. For example, CSRs sometimes use tools available on the Medicare Web site to help beneficiaries select a prescription drug plan. CMS requires CSRs to read scripts in an effort to ensure that all callers receive consistent information from 1-800-MEDICARE—regardless of the expertise of the CSR—and that information being shared is easily understood by the caller. In October 2006, CMS awarded a single performance-based contract for the operation of its 1-800-MEDICARE contact centers valued at approximately $496 million over the life of the contract. 
This contract consisted of an initial 7-month transition period, during which the current 1-800-MEDICARE contractor assumed full responsibility for 1-800- MEDICARE help line operations, and 2 additional years, referred to as “option years.” The contractor is reimbursed for its costs and receives a fixed base fee. In addition, the contractor can earn an “award fee”—a percent of the total contract amount—based on its performance in meeting specific metric standards and indicator targets that CMS identified as being particularly important to providing services to callers and controlling contract costs. The current metrics and their standards were put in place as of July 2007. Using the standards and targets, CMS evaluates the contractor’s performance three times a year. For each evaluation period, CMS can decide to award all, part, or none of the available award fee amount depending on the contractor’s performance. The current performance metrics quantify and measure the caller experience with 1-800-MEDICARE. Several metrics are specifically related to telephone access, such as the average amount of time callers wait after indicating that they want to speak to a CSR until their calls are answered by the CSR. The 1-800-MEDICARE contractor must meet the specific standards for these performance metrics in order to be awarded up to 15 percent of the award fee. Each evaluation period is 4 months long. If the contractor does not meet a performance metric standard during any month in an evaluation period, that month’s portion of the award fee is withheld at the end of that evaluation period. CMS also has designed several performance indicators that it considers when measuring the performance of the 1-800-MEDICARE contractor. Several of these indicators are specifically related to telephone access, such as the average amount of time a CSR talks to a caller. These performance indicators do not have fixed standards against which the contractor’s performance is measured; rather, CMS has established performance targets and compares the contractor’s actual performance against these targets, using a scale ranging from substandard to superior, to determine the contractor’s award fee. According to CMS officials, the purpose of the targets is to provide a baseline for planning and management of contact center activities. The performance indicators, when combined with other contract elements, account for up to 45 percent of the award fee. CMS also considers the contractor’s performance related to program management and communication, contract compliance, and fiscal responsibility when determining the award fee. These elements, when combined, account for up to 40 percent of the award fee. In addition to the main 1-800-MEDICARE contract, CMS has awarded four additional contracts to individual companies for other activities directly related to the support of 1-800-MEDICARE: A contract with a telephone company to manage the phone lines used by 1-800-MEDICARE. This company also ensures that calls to CSRs are routed to the next available CSR with the skill set to assist the caller. A contract to support and maintain the desktop application used by CSRs, including software CSRs use to access scripts. A contract to manage the NDW, a central data repository that captures, aggregates, and integrates data on 1-800-MEDICARE from multiple sources. 
A contract—referred to as the TQC contract—to conduct activities that include training, development of the scripts used by CSRs, and quality assurance, such as evaluation of CSR calls.

As required by Executive Order 13166, HHS has developed a plan that identifies the necessary steps for the department and its agencies intended to ensure access to timely, quality language assistance services by eligible LEP persons to its programs and activities, such as 1-800-MEDICARE. The HHS LEP Plan, issued in December 2000, identifies seven elements designed to help each HHS agency, program, and activity to meet the department’s goal of providing “access to timely, quality language assistance services to persons.” For example, the Plan includes elements related to oral language assistance services and efforts to assess accessibility and the quality of language assistance activities. (See app. I for a list of the seven LEP Plan elements.) The Plan reflects HHS’s overall goals for improving language access for individuals and includes strategies for improving technical assistance for language access services. HHS officials said that the Plan provides a “road map” for addressing HHS’s goals, while allowing individual operating divisions and agencies, including CMS, some flexibility in implementing the Plan’s elements.

The current 1-800-MEDICARE contractor met most standards and some targets for telephone access-related performance metrics and indicators—measures designed to ensure all callers’ access to services—from July 2007 through July 2008. In 10 of 13 months we analyzed, the current 1-800-MEDICARE contractor’s performance met the standards for each of the required access-related performance metrics. Because of waivers granted by CMS, the agency considered the contractor to have met the relevant standards in 12 of the 13 months. While generally meeting the required standards for the three access-related performance metrics, the 1-800-MEDICARE contractor consistently met the target for only one of the three access-related performance indicators we analyzed. However, the amount of time callers waited to access services has varied depending on the type and complexity of callers’ inquiries. The amount of time callers waited to speak with a CSR has increased since December 2005, but the performance standards related to caller wait time also have varied over time.

In 10 of 13 months we analyzed (July 2007 through July 2008), the current 1-800-MEDICARE contractor’s performance met the standards for all three telephone access-related performance metrics required under its contract with CMS. Because of waivers granted by CMS, the contractor was considered by the agency to have met the relevant standards in 12 of the 13 months. Implemented in July 2007, the performance metrics and their associated standards were designed to measure, on a monthly basis, the 1-800-MEDICARE contractor’s ability to ensure callers can access services and to determine the contractor’s award fee during each evaluation period; these evaluation periods occur three times a year. The metrics, described below in table 1, are: (1) the average wait time (also referred to by CMS as the average speed of answer), (2) the percentage of unhandled CSR calls, and (3) the percentage of transfers.

Average wait time.
The 1-800-MEDICARE contractor met the overall monthly average wait time performance standard—between 5 minutes and 8 minutes, 30 seconds each month—in 12 of the 13 months (from July 2007 through July 2008) we analyzed and, because of a waiver granted by CMS, was considered by the agency to have met the relevant standard in all 13 months. (See fig. 4.) CMS granted the waiver for one month’s wait time standard for multiple reasons, including to account for the interruption of normal 1-800-MEDICARE contact center operations because of flooding in the Midwest. During the 13-month period we studied, the average wait time ranged from a low of less than 6 minutes in July 2008 to a high of more than 8 minutes in September 2007. For the 5 months (October 2007 to February 2008) encompassing the most recent annual coordinated election period and a portion of the open enrollment period—when call volume to 1-800-MEDICARE typically increases—the contractor’s monthly average wait time for all calls was between 6 minutes, 30 seconds and 7 minutes. CMS’s monthly average wait time standard during the time period we analyzed allowed longer wait times than were typical at two of the four federal agencies we interviewed that use this metric for their contact centers. These two agencies used average wait time as a performance metric, with performance standards ranging from 4 minutes, 30 seconds to 5 minutes, 30 seconds. CMS officials said they recognized that the 1-800- MEDICARE average wait time performance standard was long, but said that they selected it based on what could be reasonably achieved within the current budgeted amount available for the 1-800-MEDICARE contract. CMS officials also said that they believed that the budget would not allow for a standard that was more comparable to standards used in the contact center industry. However, CMS officials reported that in August 2008, they began requiring the contractor to meet an average wait time standard of between 1 minute and 5 minutes. CMS officials said that in order to better serve callers, they and the 1-800-MEDICARE contractor implemented processes and improved technologies that have increased the efficiency of the help line. For example, CMS and the contractor implemented new staffing initiatives to ensure that 1-800-MEDICARE has the necessary CSRs available to answer calls. Unhandled CSR call rate. The 1-800-MEDICARE contractor met its unhandled CSR call rate standard, keeping the number of unhandled calls within the required standards, in 11 of the 13 months we analyzed and, because of a waiver granted by CMS, was considered by the agency to have met the standard in 12 of 13 months. (See table 2 below.) The unhandled CSR call metric provides CMS with data on the extent to which callers abandon their calls while waiting for a CSR to answer the call, and CMS officials said that they expect the number of abandoned calls to decrease with shorter average wait times. CMS granted the waiver for one month’s unhandled CSR call rate standard because of changes during that month in its expectations of the contractor’s performance. Though the performance metric standard was not changed, CMS encouraged the contractor to keep its average wait time below 8 minutes. To do this, the contractor increased the number of callbacks—instances in which callers are called back within 48 hours rather than remaining on hold. Because CMS includes callbacks in the unhandled CSR call metric, in this month the contractor exceeded the unhandled CSR call standard. 
An expert in federal call centers indicated that the unhandled call rate standard used by CMS was high compared to some other federal agencies, but that these performance standards are usually based on what level of service an agency can afford to purchase with its contact center budget. Transfer rate. Transfers from one CSR to another occurred for no more than 20 percent of calls in each of the 13 months of data we analyzed, meeting the contractually required performance standard. While CMS officials said that they expect some transfers to occur normally, they also said a monthly transfer rate that exceeds 20 percent may imply that CSRs are inappropriately transferring callers. CSRs transferred nearly 13 percent of calls in July 2007, but the transfer rate increased to just under 20 percent by October 2007. Since October 2007, the transfer rate has generally decreased, with the lowest percentage of transferred calls of the 10 months we studied occurring in June 2008. (See fig. 5.) CSRs can transfer calls to other CSRs for various reasons, including transferring a call from a Tier 1 CSR to a Tier 2 CSR if the complexity of a caller’s inquiry requires a greater degree of training. According to CMS officials, most transfers occur between Tier 1 and Tier 2 CSRs, although transfers can also occur between two Tier 1 CSRs. For example, a caller may have a general Medicare inquiry answered initially by a Tier 1 CSR, but then raise a claims-related inquiry that may be better answered by another Tier 1 CSR with additional training. In addition to the three required performance metrics, CMS also established several performance indicators and associated targets for the 1-800-MEDICARE contractor to meet in order to receive a portion of its award fee. Three of the indicators are particularly related to callers’ telephone access: (1) the average amount of time CSRs spend assisting callers, referred to as average handle time; (2) agent occupancy—the percentage of CSRs answering calls; and (3) forecasting of call volume going to CSRs. (See table 3 for more information on these selected performance indicators.) The 1-800-MEDICARE contractor met its performance target in each of the 13 months between July 2007 through July 2008 for one performance indicator—agent occupancy—but had varying experience in meeting its targets for the average handle time and CSR call volume forecasting. For agent occupancy, the 1-800-MEDICARE contractor met its performance target—to have 80 percent or more of its CSRs answering calls—in each of the 13 months of data we analyzed. During these 13 months, the 1-800-MEDICARE contractor’s agent occupancy ranged from 83 percent to 89 percent. For average handle time, the 1-800-MEDICARE contractor did not meet its performance target—most recently set at 8 minutes—in any of the 13 months analyzed. Average handle time peaked in December 2007 at 11 minutes, 18 seconds and then decreased each month, reaching an average handle time of 8 minutes, 59 seconds in July 2008. CMS officials said that, because there are many factors that could affect call length, they consider average handle time in conjunction with other metrics and indicators to determine if the contractor is managing CSRs’ time effectively. The 1-800-MEDICARE contractor met its CSR call volume forecasting target—having a variance of less than 10 percent between its forecasted CSR call volume and actual CSR call volume—in 4 of the 13 months of data we analyzed. 
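To make the arithmetic behind these measures concrete, the following sketch computes the three access metrics and two of the indicators from a small set of hypothetical call records. The field names and figures are illustrative; the contract's formal definitions govern how CMS and the contractor actually calculate them.

```python
# Illustrative sketch of the monthly access measures discussed above.
# Call records and staffing figures are hypothetical.

calls = [
    # (seconds waited for a CSR, reached a CSR, was transferred between CSRs)
    (310, True, False),
    (425, True, True),
    (510, False, False),   # caller abandoned before a CSR answered (unhandled)
    (290, True, False),
]

handled = [c for c in calls if c[1]]
average_wait = sum(c[0] for c in handled) / len(handled)          # average speed of answer (seconds)
unhandled_rate = 100 * (len(calls) - len(handled)) / len(calls)   # percent of CSR calls not handled
transfer_rate = 100 * sum(1 for c in handled if c[2]) / len(handled)

# Indicator examples: agent occupancy (share of staffed CSRs answering calls)
# and the variance between forecasted and actual CSR call volume.
csrs_staffed, csrs_on_calls = 2_600, 2_210
occupancy = 100 * csrs_on_calls / csrs_staffed                    # target: 80 percent or more

forecast_volume, actual_volume = 2_100_000, 2_010_000
forecast_variance = 100 * abs(forecast_volume - actual_volume) / actual_volume  # target: under 10 percent

print(f"Average wait: {average_wait / 60:.1f} min, unhandled: {unhandled_rate:.0f}%, "
      f"transfers: {transfer_rate:.0f}%, occupancy: {occupancy:.0f}%, "
      f"forecast variance: {forecast_variance:.1f}%")
```

In practice, figures of this kind would be produced monthly for the contractor's full call volume and compared against the standards in table 1 and the targets in table 3 when CMS evaluates performance.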
During the current contract period, callers with claims inquiries or complex inquiries who needed assistance from a Tier 2 CSR generally experienced longer wait times than other callers. Although the current 1-800-MEDICARE contract does not have an average wait time metric or indicator specifically related to the type of call, CMS’s evaluation reports have noted that the agency expected the contractor to manage wait times by type of call to ensure that a caller with a claims inquiry does not wait longer than a caller with a general Medicare inquiry. Wait times varied by type of call. Callers waited, on average, less time to have general Medicare inquiries answered than to have calls about claims inquiries answered in all but 1 of the 11 months we analyzed under the current contract. During that month—December 2007—the difference between wait times for these two types of calls was 1 second. (See fig. 6.) During the time period we reviewed—ending July 2008—callers with general Medicare inquiries waited between 5 minutes and 8 minutes, 30 seconds on average—a time that matches the performance standard for all calls during that period—in 10 of the 11 months we analyzed. In contrast, claims callers were within that range in only 5 of the 11 months with the longest waits in the first two months of this period—almost 18 minutes in September 2007 and just over 11 minutes in October 2007. In September 2007, about 48 percent of callers disconnected before a CSR could answer their calls. Beginning in November 2007 and continuing through July 2008, average wait times for callers with claims inquiries were more closely aligned with that of callers with general Medicare inquiries. (For more information on differences in average wait times by type of claims call, see app. II.) Wait times varied by complexity of calls. Between September 2007 and July 2008, callers generally waited longer, on average, to speak with a Tier 2 CSR—a CSR with additional training to respond to more complex calls— than to speak with a Tier 1 CSR. (See fig. 7.) The average wait time for a Tier 1 CSR was between 5 minutes and 8 minutes, 30 seconds—a time that matched the performance standard for all calls during that period—in 10 of 11 months we reviewed. In contrast, callers waited more than 8 minutes, 30 seconds to speak with a Tier 2 CSR in 5 of 11 months we reviewed. During the first two months of this period, the average wait time for callers needing to speak with a Tier 2 CSR was much longer—more than 12 minutes in September 2007 and 9 minutes in October 2007. Beginning in November 2007 through July 2008, average wait times for callers needing to speak to a Tier 2 CSR were more closely aligned with that of callers needing to speak to a Tier 1 CSR. (For more information on differences in average wait times by complexity of inquiry, see app. II.) CMS’s performance standards for caller wait times have varied since December 2005, and, since that time, callers have waited longer, on average, to have their calls answered by a CSR. (See fig. 8.) The varying performance standards have allowed for a very short caller wait time, such as requiring 80 percent of calls to be answered in 30 seconds, while at other times allowing for a longer wait time, such as requiring that 80 percent of calls be answered in 10 minutes. During the current contract period, CMS’s performance standards have consistently allowed for a longer average caller wait time. 
From December 2005 to October 2006, when two contractors operated 1-800-MEDICARE, the average wait time in 8 of 11 months was below 5 minutes—the minimum average wait time standard under the current contract period through July 2008. During the transition period of the current 1-800-MEDICARE contract, October 30, 2006 through May 31, 2007, the average wait time ranged from a low of 2 minutes, 6 seconds to a high of 10 minutes, 22 seconds. From the end of the transition period through July 2008, the monthly average wait times have been at or above 6 minutes, 30 seconds in all but one month, though they have not varied as widely as in prior periods we reviewed.

CMS has made efforts to provide LEP callers with access to services through 1-800-MEDICARE by requiring the contractor to provide services in either English or Spanish and to provide interpretation services for callers speaking other languages. To meet these requirements, the contractor uses CSRs who are bilingual in English and Spanish and a telephone interpretation service to assist callers who speak other languages. Spanish-speaking callers waited less time on average to reach a CSR than their English-speaking counterparts in almost two-thirds of the months we reviewed—from December 2005 through July 2008. Officials from the Office of Beneficiary Services (OBIS)—the CMS office with primary responsibility for 1-800-MEDICARE—said they were not aware of the HHS LEP Plan when awarding and assigning the current 1-800-MEDICARE contract, and CMS has not identified an office responsible for acting as a point of contact for its management of the LEP Plan. Nonetheless, steps CMS has taken to provide services to LEP callers are consistent with some elements of the HHS LEP Plan adopted by the agency without modification, such as the element related to oral language assistance, but not others, such as the Plan’s element for assessing quality and accessibility, which identifies the need for complaint mechanisms for language issues.

As required by CMS, the 1-800-MEDICARE contractor provides callers with service in either English or Spanish and provides interpretation services for callers speaking other languages. The 1-800-MEDICARE contractor uses CSRs who are bilingual in English and Spanish to provide services to Spanish-speaking callers. As of January 1, 2008, slightly more than 7 percent of all CSRs at 1-800-MEDICARE were bilingual. Bilingual CSRs complete the same training required of CSRs who are not bilingual and must successfully handle test calls in both English and Spanish prior to answering 1-800-MEDICARE calls. Like CSRs who speak only English, bilingual CSRs use scripts, translated into Spanish, to provide assistance to callers. To meet CMS’s requirement that the contractor ensure “real-time phone translations” for callers who speak neither English nor Spanish, the 1-800-MEDICARE contractor subcontracts with a “language line”—a telephone interpretation service. Through this language line, English-speaking CSRs have access to interpretation services in more than 150 languages. According to the 1-800-MEDICARE contractor, the language line provider uses internal certification and assessment to assure the quality of its interpreters. The number of languages for which CSRs have requested assistance has increased over time, from 40 languages in 2005 to 73 as of March 2008. The language line also is used to provide interpretation services for Spanish-speaking callers if the wait for a bilingual CSR exceeds 20 minutes.
In these cases, Spanish-speaking callers are transferred to non-Spanish- speaking CSRs who then use the language line to assist these callers. According to CMS officials, the cost of language line interpreters and the additional length of time required to connect and handle the call using the language line increases the costs of these calls. Calls needing Spanish interpretation accounted for the majority of calls to the language line during the time period we reviewed, growing from 60 percent of all calls to the line in 2005 to 80 percent of all calls in 2007. However, while the use of the language line for Spanish interpretation has increased, in most months we reviewed more than 90 percent of all Spanish-speaking callers to 1-800- MEDICARE were assisted by bilingual 1-800-MEDICARE CSRs. Spanish-speaking callers who speak with bilingual CSRs frequently experienced shorter average monthly wait times to reach a CSR than their English-speaking counterparts. (See fig. 9.) Spanish-speaking callers waited less time, on average, in slightly more than 60 percent of the months we reviewed (20 of 32), which encompassed parts of both the prior and current contract. While the current 1-800-MEDICARE contract does not have metrics specifically focused on call language, the average wait time experienced by Spanish-speaking callers under the current contract, beginning at the end of October 2006, was consistent with CMS’s performance standard for overall average caller wait times as of July 2008 in almost two-thirds of the months we reviewed. However, Spanish-speaking callers have recently waited slightly longer, on average, than their English-speaking counterparts. In addition, while Spanish-speaking callers with simple claims inquiries have experienced shorter average wait times than English- speaking callers, they have waited, on average, longer for all general Medicare inquiries and more complex claims inquiries. (For more information on differences in average wait times by caller language, see app. II.) Officials from OBIS—the CMS office with primary responsibility for 1-800- MEDICARE—said they were not aware of the LEP Plan when awarding and assigning the current contract for operation of the help line. Rather, they relied primarily on industry best practices when determining how to require the 1-800-MEDICARE contractor to provide LEP services. In addition, while officials from HHS’s OCR and CMS’s OEOCR said CMS chose to adopt the HHS LEP Plan as issued—although HHS allowed its agencies flexibility to modify it—officials from multiple CMS offices were unable to identify an office or official responsible for acting as the central point of contact responsible for the agency’s management of the Plan. A key factor in meeting standards for internal control in federal agencies is defining and assigning key areas of authority and responsibility—such as a point of contact for an agency-wide plan—and communicating that information throughout the organization. Without an office or official responsible for management of the Plan, staff lack a source of guidance that could assist them in taking steps consistent with the Plan to provide services to people with LEP. Although CMS officials did not consider the LEP Plan when determining how services for LEP callers to 1-800-MEDICARE would be provided, steps they have taken to implement language services are consistent with some—but not all—elements of the Plan. 
In particular, by requiring its contractor to provide services to all LEP callers, CMS has taken a step consistent with the LEP Plan element stating that “each agency, program, and activity… will arrange for the provision of oral language assistance in response to the needs of LEP customers, both in face-to-face and telephone encounters.” In addition, the LEP Plan states that agencies, activities, and programs will implement mechanisms to assess the LEP status and needs of current and potential customers. To do this, CMS officials said they and the contractor regularly review the frequency with which LEP callers contact the 1-800 line to determine appropriate bilingual CSR staffing levels. In addition, CMS officials noted that this information could be used to determine whether CSRs who speak languages other than English and Spanish should be made available to callers. However, CMS has not taken steps consistent with other Plan elements related to 1-800-MEDICARE. For example, CMS officials did not identify, and did not require the contractor to identify, a specific official or office to allow LEP callers to register concerns or complaints regarding language assistance services provided by 1-800-MEDICARE, as indicated by the LEP Plan element for assessing accessibility and quality. In addition, while CMS considers the number and languages of LEP callers currently using 1-800-MEDICARE, consistent with the LEP Plan’s element related to the assessment of needs and capacity, neither the number nor proportion of Medicare-eligible or Medicare-enrolled LEP beneficiaries is specifically considered when planning how 1-800-MEDICARE language services will be provided. The LEP Plan also directs agencies to develop policies and procedures for each Plan element, to designate staff responsible for implementing these policies and procedures, and to provide staff with training on them. However, given that CMS staff within OBIS were not aware of the existence of the LEP Plan until our review, no uniform CMS policies and procedures had been used to implement any elements of the LEP Plan for 1-800-MEDICARE, nor had training specific to the LEP Plan occurred. CMS uses all six of the management practices we identified as commonly used by contact centers to oversee 1-800-MEDICARE callers’ access to services and to accurate information. (See table 4 for a description of these six common oversight practices.) These practices are addressed in the current 1-800-MEDICARE contract and reflected in CMS’s ongoing oversight of the help line. For example, CMS awarded some of the available award fee to the 1-800-MEDICARE contractor for meeting certain performance metrics and used CSR feedback to improve the oversight of information CSRs provide to beneficiaries. In addition, CMS used customer satisfaction surveys to identify areas for improvement, including the IVR, and worked with the contractor to improve call volume forecasting in an effort to ensure appropriate staffing levels to meet callers’ needs. Clearly defining performance metrics and indicators. The performance metrics and indicators that CMS officials said they designed to encourage improved performance and correct identified problems are clearly defined in the 1-800-MEDICARE contract. The current 1-800- MEDICARE contract identifies each metric and indicator, provides a definition for these measures, and sets a standard or target for each measure. 
CMS uses these measures to evaluate contractor performance three times a year and to provide an award fee if the contractor performs at or above set levels. Because the contractor did not meet all standards and targets for any evaluation period as of May 2008, CMS has awarded only part of the possible award fee to date, in accordance with the award guidance in the 1-800-MEDICARE contract. Award fees are based on a scoring system, in which a percentage of the award fee is distributed depending on CMS’s rating of the contractor’s performance, ranging from the lowest rating, substandard, to the highest rating, superior. CMS rated the contractor as good or very good from the start of the current contract through May 2008. CMS also uses the performance metrics and indicators on a continuous basis to monitor and evaluate callers’ ability to access information from 1-800-MEDICARE. For example, CMS officials said they receive updates from the contractor on the average wait times—a measure of caller access—through daily e-mails. If callers’ average wait time exceeds 5 minutes, CMS officials receive e-mail notification every half hour. During that time, CMS officials said the contractor also notifies them of actions it is taking to improve caller wait times. Customer satisfaction surveys. CMS officials said they use the results of customer satisfaction surveys to gain insight into callers’ experiences with 1-800-MEDICARE services and to identify opportunities for improving services. In June 2008, the TQC contractor assumed responsibility for conducting a customer satisfaction survey of 20 percent of randomly selected 1-800-MEDICARE callers. The TQC contractor will provide CMS with the results from this survey on an ongoing basis; however, initial results were not available as of May 2008. Prior to June 2008, CMS required the 1-800-MEDICARE contractor to call back randomly selected beneficiaries who contacted 1-800-MEDICARE and administer a customer satisfaction survey developed by CMS. CMS officials said they used the information from the surveys to make changes to 1-800-MEDICARE. For example, based on survey participant concerns about the IVR, which is used by 18 to 20 percent of callers to resolve their inquiries, CMS officials indicated they made changes to the IVR prompts to make them easier to follow. CMS officials said that they anticipate future callers to 1-800-MEDICARE will be more willing to use technologies such as the IVR to obtain information. However, as of November 2008, only limited testing and focus groups had been conducted to confirm this anticipated trend. Ensuring accurate information. To ensure that beneficiaries receive consistent and accurate program information, CSRs are required to use scripts approved by the agency and CMS, and contractor officials reported taking steps designed to make correct scripts easier for CSRs to identify and to make scripts easier to understand. In April 2008, the TQC contractor began developing content for 1-800-MEDICARE scripts, which previously had been developed by the 1-800-MEDICARE contractor and reviewed by CMS. The TQC contractor employs staff with expertise in Medicare who develop and revise scripts as needed. CMS reviews and approves all scripts before making them available to CSRs. According to CMS officials, in August 2008 the TQC contractor began quarterly script reviews for accuracy as well as legislative, program, or policy changes. 
In addition to reviews conducted by the TQC contractor, CMS officials said that they have used the results of customer satisfaction surveys and call evaluations to assess how well scripts meet the needs of callers and how easy scripts are for CSRs to find. CMS can also access a computer application that captures CSR feedback on scripts, including how easy scripts are to understand, an issue identified in previous GAO work. Evaluating CSR interaction with callers. To assess whether callers receive consistent and accurate information from 1-800-MEDICARE, CMS requires both the 1-800-MEDICARE contractor and the TQC contractor to evaluate CSRs’ interactions with callers through call monitoring. The 1-800-MEDICARE contractor listens to four calls a month for each CSR, evaluating the CSRs’ performance on customer service skills identified by CMS, including tone, using scripts appropriately, and completeness of information provided. CMS officials said they designed the 1-800-MEDICARE contractor’s call evaluation process for contractor supervisors to coach CSRs on their performance and to help improve CSRs’ ability to use software to find appropriate scripts. The 1-800-MEDICARE contractor has reported on the required evaluations monthly. Using these evaluation reports, CMS officials said, they worked with the 1-800-MEDICARE contractor to identify and correct trends or issues with scripts, software, or CSR training. For example, CMS used the call evaluation process to determine the reasons why callers may need to be transferred to a more experienced CSR or place another call to 1-800-MEDICARE. This analysis was part of the “First Call Resolution Initiative,” an effort to increase the number of callers who have their inquiries resolved with one phone call rather than many. However, in its evaluation of the 1-800-MEDICARE contractor for the period ending May 2008, CMS reported that it observed callers receiving poor service from CSRs who received perfect call evaluation scores from the 1-800-MEDICARE contractor for those calls. CMS noted the 1-800-MEDICARE contractor needed to improve consistency between actual CSR call evaluation scores and the quality of service callers receive. In addition, since April 1, 2008, CMS has required the TQC contractor each month to monitor and evaluate 600 randomly selected calls in English and 225 randomly selected calls in Spanish. This call sample is designed to allow CMS to generalize trends that emerge from this sample to the call volume of 1-800-MEDICARE as a whole. CMS officials said they anticipate analyzing any trends identified from this call monitoring to note areas for improvement to the 1-800-MEDICARE help line. CMS officials also said that they will use the results of the TQC scores as part of the 1-800-MEDICARE contractor’s regular performance evaluation and resulting award payment beginning in October 2008. In addition, CMS officials reported monitoring calls themselves and meeting weekly with the 1-800-MEDICARE and TQC contractors to listen to and rate recorded calls as a group. CMS officials said that these meetings help to ensure that the contractors understand the standard of service CMS expects callers to receive. CMS officials also planned to perform similar evaluations on calls for which customer satisfaction survey information is available—a practice identified by an industry expert as a method to improve contact center service. Capacity planning. 
CMS officials said they work with the 1-800-MEDICARE contractor to create short- and long-term call volume forecasts and to determine whether systems and staffing can handle call volume, taking into account Medicare’s annual coordinated election and open enrollment periods when inquiries peak. CMS requires the 1-800-MEDICARE contractor to produce call volume forecasts that are accurate within 10 percent of actual call volumes for the forecasted period. CMS oversight of the contractor’s forecasting efforts has identified significant differences between the long-term forecasts and actual monthly volume of calls going to CSRs—forecasting up to 35 percent more calls than were actually received by CSRs for the performance period ending January 2008 and causing projected staffing costs for this period to be overstated. In its evaluation of the performance period ending January 2008, CMS notified the contractor that performance in this area needed to improve and stated that it wanted the contractor to identify methods of ensuring consistent and accurate forecasts. However, CMS noted that when the forecasted call volume was not realized, the 1-800-MEDICARE contractor adjusted its staffing so that only the number of CSRs needed to meet the performance standard related to average wait times was available. In its evaluation for the period ending May 2008, CMS noted a significant improvement in long-term forecasting and indicated that the 1-800-MEDICARE contractor had a better understanding of events that affected call volume throughout the year. To ensure that 1-800-MEDICARE systems, such as phone lines and desktop software, are available to handle forecasted call volumes, CMS requires the 1-800-MEDICARE contractor to notify the agency of systems outage incidents that affect callers’ access. In the evaluation period ending January 2008, CMS identified inconsistencies in the outage reporting process, which caused key information to be omitted from reports about systems outage incidents. In its evaluation for the period ending May 2008, CMS noted some improvement in reporting systems outage incidents, but indicated that the 1-800-MEDICARE contractor needed to improve the consistency of its reporting practices in the future. CMS officials said they are working with the contractor to address this issue and reported finalizing a process for reporting systems outages in early November 2008. Validation of contractor reports. CMS officials said they validate contractor reports—many of which are used to determine contract award fees—by analyzing data captured by 1-800-MEDICARE computer systems and having regular meetings with their contractors. To ensure the integrity of data collected by 1-800-MEDICARE systems, CMS collects and stores these data in the NDW, which is managed by a separate contractor. CMS officials compare data from the NDW on actual call center performance to reports submitted by the 1-800-MEDICARE contractor, such as the long-term call volume forecasts. Using this method of validating reports, CMS determined that the 1-800-MEDICARE contractor’s forecasting reports were inaccurate and required the contractor to improve in this area, which the contractor did over the next evaluation period. In addition to validating contractor reports through the NDW, CMS officials said they used weekly and monthly status reports, meetings with contractors, and visits to 1-800-MEDICARE contractor sites to monitor contractor performance between periodic evaluations. 
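The 10 percent forecast-accuracy requirement, and CMS’s practice of checking contractor reports against NDW data, amount to a percentage-variance comparison between forecasted and actual call volumes. The short Python sketch below illustrates that kind of check; the function names, example figures, and use of Python are hypothetical illustrations and do not describe CMS’s or the contractor’s actual systems.

```python
# Minimal sketch of a forecast-accuracy check; names and figures are
# hypothetical and do not reflect CMS's or the contractor's actual systems.

def forecast_variance_pct(forecast_calls: int, actual_calls: int) -> float:
    """Percentage by which the forecast deviates from actual call volume."""
    return (forecast_calls - actual_calls) / actual_calls * 100


def within_tolerance(forecast_calls: int, actual_calls: int,
                     tolerance_pct: float = 10.0) -> bool:
    """True if the forecast falls within the contractual tolerance."""
    return abs(forecast_variance_pct(forecast_calls, actual_calls)) <= tolerance_pct


# A month forecast 35 percent above the volume actually handled by CSRs
# (e.g., 1,350,000 forecast vs. 1,000,000 actual) fails a 10 percent tolerance.
print(forecast_variance_pct(1_350_000, 1_000_000))  # 35.0
print(within_tolerance(1_350_000, 1_000_000))       # False
```

Read this way, the 35 percent over-forecast CMS cited for the period ending January 2008 falls well outside the 10 percent requirement, which is consistent with CMS’s finding that projected staffing costs for that period were overstated.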
To date, the current 1-800-MEDICARE contractor has met most of CMS’s performance standards and some of the performance targets designed to ensure callers’ access to services from the help line. In addition, by employing all six commonly used management practices to oversee 1-800-MEDICARE callers’ access to services and accurate information, CMS gains valuable data to assess the contractor’s performance and identify areas for improvement. In particular, the new TQC contract provides CMS with an opportunity to continue to improve both access to, and accuracy of, information. However, while callers with LEP can access services through 1-800-MEDICARE, CMS has not taken steps to ensure that officials throughout the agency, including within OBIS, are fully aware of the LEP Plan, which HHS designed to be a “road map” for providing appropriate services to this population. By not identifying an official point of contact responsible for management of the Plan, CMS lacks a key internal control measure—a clearly defined area of responsibility that has been communicated agencywide. While CMS has taken steps to ensure access for LEP callers to 1-800-MEDICARE, a clearly identified office or official responsible for the Plan could provide guidance in areas where steps consistent with the LEP Plan have not been taken and could work to ensure consistent use of the Plan across the agency. To ensure that CMS offices, including those that oversee the operation of the 1-800-MEDICARE help line, are aware of, and take steps consistent with, the HHS LEP Plan when considering the needs of people with LEP, CMS should designate an official or office with responsibility for managing the LEP Plan. We provided CMS with a draft of this report for its review and comment. The agency provided written comments, which we addressed as appropriate and which have been reprinted in appendix III. The current 1-800-MEDICARE contractor stated that the report was factually accurate and provided oral technical comments, which we incorporated as appropriate. The Social Security Administration and the departments of Defense, Education, and Treasury told us they had no comments on the draft report. In responding to our draft, CMS stated that it has taken steps to implement our recommendation. More specifically, CMS has identified an official responsible for the development of an LEP Plan for the agency that, when finalized, is intended to define responsibility for ensuring consistent and reliable LEP tracking and reporting. CMS also noted that the report identified issues related to accurate forecasts and wait times experienced by callers for general Medicare and claims calls. The agency reiterated that these issues were affected by the transition, completed in 2007, of claims calls previously answered by FFS contractors to 1-800-MEDICARE. CMS also stated that its intent is to try to fully answer callers’ questions rather than focusing exclusively on reducing average handle time—the average amount of time CSRs take to respond to callers’ inquiries. Additionally, CMS stated that while it has not specifically evaluated the number or proportion of LEP beneficiaries who are Medicare eligible and enrolled, it continues to review demand and consider other opportunities to best serve LEP callers. Finally, CMS noted that some information regarding the 1-800-MEDICARE contractor’s operations and contract with the agency is considered proprietary. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and others. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Elements of the Department of Health and Human Services’ Limited English Proficiency Plan “Each agency, program, and activity of the HHS [Department of Health and Human Services] will have in place mechanisms to assess, on a regular and consistent basis, the LEP status and language assistance needs of current and potential customers, as well as mechanisms to assess the agency’s capacity to meet these needs according to the elements of this plan.” “Each agency, program, and activity of HHS will arrange for the provision of oral language assistance in response to the needs of LEP customers, both in face-to-face and telephone encounters.” “Each agency, program, and activity of HHS will provide vital documents in languages other than English where a significant number or percentage of the customers served or eligible to be served has limited English proficiency. These written materials may include paper and electronic documents such as publications, notices, correspondence, web sites and signs.” “Each agency, program, and activity of HHS will have in place specific written policies and procedures related to each of the plan elements and designated staff who will be responsible for implementing activities related to these policies.” “Each agency, program, and activity of HHS will proactively inform LEP customers of the availability of free language assistance services through both oral and written notice, in his or her primary language.” “Each agency, program, and activity of HHS will train front-line and managerial staff on the policies and procedures of its language assistance activities.” “Each agency, program, and activity of HHS will institute procedures to assess the accessibility and quality of language assistance activities for LEP customers.” This appendix provides more detailed information on the average wait times experienced by callers depending on the type of inquiry—general Medicare and claims—and the complexity of their call or the language in which they need assistance for the period September 2007 through July 2008 of the current contract. Figures 10 and 11 provide information on differences in average wait times experienced by callers depending on their type of inquiry—general Medicare and claims—and the complexity of their inquiry. While simple inquiries may be resolved by Tier 1 CSRs, inquiries of greater complexity may require the assistance of a Tier 2 CSR. Figures 12 through 14 compare the wait times for callers with simple inquiries to callers with complex inquiries by type of claims inquiry—Medicare Part A, Part B, and, within Part B, durable medical equipment (DME). Figures 15 through 18 provide information on differences in average wait times experienced by callers depending on the language in which they are assisted. 
The information provided compares the wait times of English- and Spanish-speaking callers for each type of inquiry, by complexity level. In addition to the person named above, Karen Doran, Assistant Director; Jennie F. Apter; Hernan Bozzolo; Eleanor M. Cambridge; Emily R. Gamble Gardiner; Barbara A. Hills; Martha R.W. Kelly; Ba Lin; Lisa S. Rogers; and Hemi Tewarson made key contributions to this report.
Tax Administration: 2007 Filing Season Continues Trend of Improvement, but Opportunities to Reduce Costs and Increase Tax Compliance Should be Evaluated. GAO-08-38. Washington, D.C.: November 15, 2007.
Tax Administration: Most Filing Season Services Continue to Improve, but Opportunities Exist for Additional Savings. GAO-07-27. Washington, D.C.: November 15, 2006.
Federal Contact Centers: Mechanisms for Sharing Metrics and Oversight Practices along with Improved Data Needed. GAO-06-270. Washington, D.C.: February 8, 2006.
Tax Administration: IRS Improved Some Filing Season Services, but Long-term Goals Would Help Manage Strategic Trade-offs. GAO-06-51. Washington, D.C.: November 14, 2005.
Social Security Administration: Additional Actions Needed in Ongoing Efforts to Improve 800-Number Service. GAO-05-735. Washington, D.C.: August 8, 2005.
Immigration Services: Better Contracting Practices Needed at Call Centers. GAO-05-526. Washington, D.C.: June 30, 2005.
Tax Administration: IRS Improved Performance in the 2004 Filing Season, But Better Data on the Quality of Some Services Are Needed. GAO-05-67. Washington, D.C.: November 15, 2004.
Tax Administration: IRS Needs to Further Refine Its Tax Filing Season Performance Measures. GAO-03-143. Washington, D.C.: November 22, 2002.
IRS Telephone Assistance: Limited Progress and Missed Opportunities to Analyze Performance in the 2001 Filing Season. GAO-02-212. Washington, D.C.: December 7, 2001.
The Centers for Medicare & Medicaid Services (CMS) is responsible for providing beneficiaries timely and accurate information about Medicare. Receiving nearly 30 million calls in 2007, 1-800-MEDICARE, operated by a contractor, is the most common way members of the public get program information. The help line provides services both to English-speaking and limited English proficiency (LEP) callers. In this report, GAO describes (1) the extent to which access performance standards and targets have been met by the current contractor, (2) the efforts by CMS to provide LEP callers access to help line services and wait times experienced by these callers, and (3) CMS's oversight of callers' access to 1-800-MEDICARE and the information's accuracy. To conduct this work, GAO reviewed documents and analyzed help line data through July 2008. In addition, GAO interviewed agency staff, industry experts, and officials at four federal agencies with high call volume contact centers. The 1-800-MEDICARE contractor met most standards and some targets for the required telephone performance metrics and indicators CMS designed to ensure callers' access--from July 2007 through July 2008. The 1-800-MEDICARE contractor's performance met the standard for each of the three access-related metrics--the average amount of time callers wait to reach customer service representatives (CSR), the percent of unhandled calls, such as abandoned calls, and the percent of calls transferred among CSRs--in 10 of 13 months analyzed. Because of waivers granted by CMS, the contractor was considered by the agency to have met the standards in 12 of 13 months. During that time, the contractor met the target for only one of three access-related indicators--the percent of CSRs answering calls. Other indicators were the average amount of time needed to respond to callers' inquiries and the accuracy of CSR call volume forecasting. CMS's efforts to provide LEP callers with access have led to shorter average wait times for Spanish-speaking callers, but are not consistent with all elements of the HHS LEP Plan. CMS requires its help line contractor to provide services to Spanish-speaking callers by employing bilingual CSRs and to provide interpretation services for other LEP callers, which the contractor does by using telephone interpreters. In 20 of the 32 months reviewed, Spanish-speaking callers waited less time, on average, to reach a CSR than English-speaking callers. CMS officials with primary responsibility for 1-800-MEDICARE said they were not aware of the LEP Plan when awarding the current contract, and CMS has not identified an office responsible for acting as a point of contact for management of the LEP Plan. Without a responsible office or official, an internal control for federal agencies, CMS staff lack a source of guidance to assist them in taking steps consistent with the LEP Plan when considering the needs of people with LEP. However, CMS has taken steps consistent with some elements of the agency's adopted LEP Plan, such as the element related to oral language assistance, but not others, such as the element identifying the need for complaint mechanisms for language issues. To oversee 1-800-MEDICARE callers' access to services and accurate information, CMS uses all six commonly used contact center management practices. 
Based on GAO's review of the literature and interviews with federal agencies and industry experts, these management practices are: (1) clearly defining performance metrics, (2) performing accurate capacity planning, (3) conducting customer satisfaction surveys, (4) ensuring information for CSRs to reference is accurate, (5) evaluating CSRs' interaction with callers, and (6) validating contact center performance reports. These practices are addressed in the current 1-800-MEDICARE contract and reflected in CMS's ongoing contract oversight.
The WIC program was created in 1972 in response to growing evidence of poor nutrition and related health problems among low-income infants, children, and pregnant women. It is intended to serve as an adjunct to good health care during critical times of growth and development. In addition, WIC was designed to supplement the Food Stamp Program and other programs that distribute foods to needy families. Several population groups are eligible for the supplemental foods and nutrition services offered by WIC. Eligible groups include lower-income pregnant women, nonbreastfeeding women up to 6 months postpartum, breastfeeding women up to 1 year postpartum, infants, and children up to age 5 who are at nutritional risk. WIC provides cash grants to support program operations at 88 state-level WIC agencies, including those in all 50 states, American Samoa, the District of Columbia, Guam, Puerto Rico, the U.S. Virgin Islands, and 33 Indian tribal organizations. Food and NSA grants are allocated to the state agencies through a formula based on caseload, inflation, and poverty indices. Small amounts are also set aside and distributed, at USDA’s discretion, to fund updates to infrastructure—like the development of electronic benefit transfers—and to fund evaluations performed by state agencies. Some state-level agencies that operate the program at both the state and local levels retain all of their WIC grants. The remaining state-level agencies retain a portion (the national average is about one-quarter) of the funds for their state-level operations and distribute the remaining funds to nearly 1,800 local WIC agencies. In 1998, state and local WIC agencies relied primarily on their federal NSA grant funds to support their NSA operations. Although no state-matching requirement exists for federal WIC funding, some state WIC agencies have received supplemental funds from their state governments for NSA. Some state and local WIC agencies also receive in-kind contributions, such as office space, from nonfederal sources such as local governments and private nonprofit agencies. NSA grants cover the costs of providing various nutrition services— participant services, nutrition education, and breastfeeding promotion. Participant services include numerous activities such as determining eligibility, food benefit distribution, screening for up-to-date immunizations, and referrals to other health or social services. Each of these activities includes many processes. For instance, we reported in September 2000 that certification involves identifying income, participation in a qualifying program such as Medicaid, pregnancy or postpartum status, and medical or nutritional risks. The length of time that a person is certified to participate in the program typically ranges from 6 months to 1 year, depending on such factors as whether the participant is a woman, a child, or an infant. Nutrition education consists of individual or group education sessions and the provision of information and educational materials to WIC participants. Regulations require that the nutrition education bear a practical relationship to participant nutritional needs, household situations, and cultural preferences. Nutrition education is offered to all adult participants and to parents and guardians of infant or child participants, as well as child participants, whenever possible. It may be provided through the local agencies directly or through arrangements made with other agencies. 
Individual participants are not required to attend or participate in nutrition education activities to receive food benefits. Breastfeeding promotion activities focus on encouraging women to breastfeed and supporting those women who choose to breastfeed. Each local agency is required to designate a breastfeeding coordinator, and new staff members are required to receive training on breastfeeding promotion and support. WIC endorses breastfeeding as the preferred method of infant feeding. Although state agencies must operate within the bounds of federal guidelines, they have the flexibility to adjust program services to meet local needs. States can add program requirements. For example, in 1999, Montana required its local agencies to formally document referrals made to WIC participants, though this is not required by program regulation. States that utilize local agencies to provide nutrition services also provide these local agencies with some discretion in implementing the local program. This means that the specifics of the WIC program can vary from state to state and locality to locality. In 2001, USDA and the National Association of WIC Directors (NAWD) distributed revised Nutrition Service Standards that provide WIC agencies with guidelines on providing high-quality nutrition services. The WIC program faces the following challenges in delivering high-quality nutrition services: (1) coordinating its nutrition services with health and welfare programs undergoing considerable change; (2) responding to health and demographic changes in the low-income population that it serves; (3) recruiting and keeping a skilled staff; (4) improving the use of information technology to enhance service delivery and program management; (5) assessing the effect of nutrition services; and (6) meeting the increased program requirements without a corresponding increase in funding. Over the past decade, major changes in the nation’s health and welfare delivery systems have presented WIC agencies with the challenge of identifying and enrolling eligible participants and coordinating with other service providers in a new environment. More specifically, state Medicaid agencies’ increased reliance on private managed care organizations has reduced the service delivery role of local public health agencies, the entities with which WIC agencies have had a long-established relationship. As a result, WIC’s link to the health care system has been weakened, making it more difficult for WIC agencies to identify eligible individuals and coordinate services with their participants’ health care providers. Additionally, changes brought about by welfare reform—which include the elimination of Temporary Assistance for Needy Families (TANF), Food Stamp, and Medicaid benefits for many individuals including noncitizens— have decreased WIC’s ability to reach eligible individuals through these programs. Two recent and related changes in the health care system are presenting new challenges to WIC agencies in carrying out their referral, outreach, and coordination efforts. The first change is the rapid growth since 1991 in the percentage of Medicaid beneficiaries who are enrolled in managed care (see fig. 1). This increase in the percentage of Medicaid beneficiaries receiving health services from managed care providers contributed, in part, to the second change: the reduction or elimination of direct health care services by many local public health departments. 
According to a national survey of local health departments offering comprehensive primary care services in urban areas in 1995, about 20 percent stopped providing such services to women and children by 1999. Similarly, about 9.4 percent of those offering comprehensive primary care services to women in nonurban areas in 1995 stopped providing such services by 1999, and 15.5 percent of nonurban agencies stopped providing such services to children. With the reduction in the number of public health departments serving women and children, public health officials have increasingly turned to WIC to help address the health needs of low-income children. According to CDC, WIC has become the single largest point of access to health- related services for low-income preschool children. Consequently, the CDC has turned to WIC to provide services traditionally performed by local health departments, such as identifying children who are not fully immunized. These changes have several implications for WIC. Historically, many WIC participants have been able to receive health services, such as pediatric care, at the WIC sites. This proximity could facilitate the required link between WIC services and health care; health care providers could easily refer Medicaid and uninsured patients to the WIC program, and WIC staff could easily refer WIC participants to appropriate health care services. This arrangement also made it more convenient for participants to schedule appointments for both WIC and health services. However, as Medicaid managed care providers have increasingly replaced local public health clinics as providers of maternal and child health care, this link between WIC services and health care has weakened. The convenience for many WIC staff and participants of having WIC and health care services co-located has been lost. As a result, many WIC agencies must extend their outreach efforts to contact people, especially uninsured individuals not connected with the health care system, who are eligible for WIC. Given these changes, it will be a challenge for WIC to effectively coordinate its services with other health providers. Evidence already suggests that WIC agencies are struggling with this coordination. For example, a national survey conducted by the Women’s and Children’s Center at Emory University’s Rollins School of Public Health found that only 26 percent of state WIC agencies had made specific arrangements, such as developing formal guidance, for the collaboration of services between WIC and managed care providers in 2000. The Center published a resource guide to assist in the collaboration between WIC and managed health care. The guide identified several barriers to the coordination between WIC and managed care providers and provided descriptions of strategies that state and local WIC agencies can use to overcome such barriers, though it suggests that employing suggested strategies will increase staff responsibility and program costs. The barriers include the following: Lack of understanding. WIC staff do not understand the managed care system and managed care providers do not understand WIC. Lack of specific requirements. State Medicaid agencies may not have instituted specific contractual requirements for managed care organizations or providers to make referrals or supply needed information to WIC agencies. Communication difficulties. Managed care providers’ change in ownership has been accompanied by communication difficulties. 
The termination of Medicaid contracts with managed care providers and the location of some managed care provider headquarters in another state can also make communication difficult. Welfare reform, which made major changes to the nation’s social safety net, has also placed new demands on WIC’s client services and outreach. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193), which replaced the Aid to Families with Dependent Children with TANF, established a lifetime, 5-year time limit on the receipt of TANF benefits and required states to place work or work-related requirements on a percentage of households receiving TANF. The act also made several categories of noncitizens ineligible for TANF, food stamps, and Medicaid. Welfare reform has contributed to the decline in the participation in public assistance programs. Various studies, including those that we have conducted, have concluded that the implementation of the provisions of welfare reform is associated with the decline of eligible individuals enrolled in the Food Stamp Program and Medicaid. Although we did not identify any nationwide assessment of welfare reform’s impact on WIC participation, state and/or local WIC officials from all six of our case study agencies reported that welfare reform has decreased program participation by eligible individuals, including noncitizens and working women. Declining participation in assistance programs may complicate WIC client services, such as making eligibility and referral determinations. Individuals who receive TANF, food stamps, or Medicaid automatically meet WIC’s income eligibility requirement—documentation of their enrollment in one or all of these programs is sufficient proof that they qualify financially for WIC. However, as the number of WIC applicants who are enrolled in these programs decreases, WIC staff members may need to spend more time collecting and reviewing other documents to determine whether applicants meet income eligibility requirements. Moreover, the responsibility of WIC staff to make appropriate referrals to other programs, both public and private, may grow at those agencies where WIC has become a gateway to the social safety net for low-income individuals. Restrictions on providing welfare benefits to noncitizens may require WIC to increase its outreach efforts among these groups. With welfare reform, several categories of noncitizens are no longer eligible for TANF, food stamps, or Medicaid. However, noncitizens continue to be eligible for the WIC program. The National Advisory Council on Maternal, Infant and Fetal Nutrition, as well as WIC officials from several of our case studies, suggested that noncitizens may fear that participating in WIC could threaten their immigration status. Welfare reform’s emphasis on work has created the challenge of making WIC services accessible to a population with new demands on their time. In five of our six case study sites, WIC officials attributed declines in WIC participation, in part, to the increase in the number of women who were working or attending school due to welfare reform. At three case study sites, WIC officials indicated that the increasing numbers of working women placed increased pressure on WIC agencies to offer WIC services outside of normal working hours. Increasing access, which may involve offering evening or weekend hours, can result in higher costs to the WIC program. 
WIC faces the challenge of responding to changes in the health and demographics of its participants and potential participants. The WIC population, like the general population, has experienced a dramatic increase in the prevalence of overweight and obesity and related diseases, such as diabetes. In addition, demographic changes, such as increases in WIC’s ethnic population, have occurred during recent years. These changes have placed demands on WIC agencies to play a more active role in helping to treat and prevent nutrition-related health problems and adapting nutrition services to the evolving needs of program participants. The nation’s population has experienced a dramatic increase in the prevalence of overweight and obesity in recent years. According to the CDC, the prevalence of overweight and obesity has reached epidemic proportions. For example, the prevalence of overweight adults increased over 60 percent between 1991 and 2000. Research suggests that the prevalence of overweight and obesity is even higher among individuals who are low-income, a characteristic of the WIC population. The surge in the prevalence of overweight and obesity is not limited to adults. According to the CDC pediatric nutrition surveillance data, which are collected primarily from the WIC program, the prevalence of overweight children age 2 and older (but younger than 5), increased by almost 36 percent from 1989 to 1999. In 1999, almost 10 percent of children in this age group were overweight or obese. Some children are at even greater risk. Hispanic children, a growing segment of the WIC population, had the second highest prevalence of being overweight according to the 1999 CDC pediatric surveillance data. For both adults and children, being overweight and obese is associated with a variety of health problems, including diabetes, heart disease, and some types of cancer. As the prevalence of overweight and obesity has increased, research suggests that the incidences of diabetes during pregnancy and diabetes in adults have also increased. Recognition of this epidemic, particularly its effect on low-income women and children, has increased the pressure on WIC agencies to adapt their nutrition services to help prevent and treat overweight, obesity, and related health problems. In addition to helping to respond to this epidemic, WIC must continue to serve low-income women and children who are susceptible to other diseases, some new and some long-standing, such as anemia, HIV/AIDS, elevated levels of lead in blood, and tooth decay. The nutrition education and breastfeeding promotion activities provide an opportunity for WIC staff to help participants prevent these diseases. However, WIC faces several obstacles—such as limited time and resources—in adapting its nutrition education to respond to these new and long-standing health issues. WIC staff has limited time to provide the type of counseling needed to discuss disease prevention. Our study of six local WIC agencies found that individual nutrition education sessions did not last long, ranging from an average of 4 minutes to 17 minutes among the six agencies. In addition, WIC regulations require only two nutrition education contacts during each 6-month WIC certification period. It is difficult to help prevent numerous nutrition-related diseases with a few brief nutrition education sessions. 
WIC nutrition education was originally intended, according to USDA officials, to provide a relatively basic message about the value of good nutrition to low-income pregnant and postpartum women whose diets were inadequate. To help address more complex nutrition problems, such as obesity, according to a CDC expert on nutrition, WIC’s nutrition education needs to be fundamentally changed in several ways. This expert indicated that nutrition education has focused traditionally on advising families to eat more fruits and vegetables. He suggests it now needs to focus more on teaching parents that they need to be responsible for the types of food offered to their children and let children decide how much to eat. In addition, the CDC expert indicated that the scope of nutrition education needed to be expanded to include such topics as physical activity, television viewing, and fast foods. Local WIC agencies tend to rely on two techniques to provide nutrition education. According to a 1998 USDA survey, over three-quarters of local WIC agencies always used counseling/discussion and written materials to provide nutrition education. Less than 10 percent of the agencies in the survey reported using other techniques such as food tasting or videos to provide nutrition education. Several experts have suggested that WIC agencies need to use multiple teaching techniques. They also suggested that these techniques be tailored to each participant and that the participant be included in designing the education that best meets his or her needs. While USDA has undertaken several initiatives, existing resources appear to limit the program’s ability to address emerging health issues. To develop and implement a response to diseases such as obesity, WIC would need to devote additional resources to nutrition education, according to CDC and USDA officials. Devoting resources to address new health issues may come at the expense of other program priorities. In addition, current WIC program regulations on the use of resources may limit the effectiveness of the response to some emerging health issues. For example, costs associated with providing physical activity classes and equipment, which appear to be important in addressing weight problems, are not allowable expenditures. Any strategies that WIC employs to address health issues such as obesity would have to contend with some formidable social forces. Two of these forces are the prevalence of advertising and the decrease in physical activity. Advertising has a significant impact on eating behaviors. For example, one study found that 1 or 2 exposures to advertisements of 10 to 30 seconds could influence preschool children to choose low-nutrition foods. Research also shows that several environmental trends, such as increased television viewing and increased consumption of fast foods, have contributed to obesity nationally. According to government statistics, numerous changes in the demographics of the nation’s population have occurred during the 1990s. Several of these changes—shifts in the population’s ethnic composition, increases in the number of working women, and the growing number of preschool children enrolled in daycare—were also seen in the WIC-eligible population. WIC is faced with the challenge of responding to each of these changes. Over the years, the ethnic composition of the WIC population has changed. In 1988, almost half of WIC participants were white and over one-quarter were African-American. 
The composition began to change in the mid-1990s when the number of Hispanic WIC participants began to grow. Between 1994 and 1998, the percentage of WIC participants who were Hispanic increased from 26 percent to 32 percent. During the same period, the percentage of WIC participants who were African-American declined from about 25 to 23 percent, while there were only slight changes among other racial or ethnic groups. Some WIC agencies serve more ethnically diverse communities than others. For example, three of our five local case study agencies served predominantly white communities, while two agencies served very diverse populations. One local agency director reported that less than one-quarter of the agency’s WIC participants spoke English as a primary language. As a result of the changing make-up of WIC’s participant population, WIC agencies are faced with the challenge of providing nutrition services that are culturally and ethnically appropriate, as the program requires. Recent data suggest that WIC agencies offer nutrition education in several languages. Over half of the local agencies responding to a 1998 USDA survey indicated that nutrition education was available in Spanish. Providing nutrition education and other services in a foreign language requires agencies to employ staff members who speak languages other than English or to pay for interpreter services, which can be costly. In addition, USDA and state and local WIC agencies have developed teaching materials, such as brochures, in foreign languages. WIC agencies may need to increase staff awareness of the different nutritional needs and preferences of the various ethnic and cultural groups that they serve. For example, research conducted in the early 1990s involving urban African-American WIC mothers suggested a tendency to introduce infants to solid food in the first few weeks of life, rather than waiting 4 to 6 months, as recommended. This practice occurred despite the mothers’ receiving WIC counseling and educational materials. Understanding the distinctive nutritional preferences of participant groups requires WIC staff to dedicate time to studying different cultures and related health and nutrition research, a particularly challenging task for WIC agencies that serve several ethnic or cultural groups. As the composition of the WIC population has changed, the percentage of women in the WIC program who work has increased, according to some state WIC officials. In 1998, about 25 percent of women who were certified for the WIC program, or who certified a child, were employed, according to data provided by USDA. While no data exist on the change in recent years in the percentage of women participants who are working, data from the Bureau of Labor Statistics suggest that work activity has increased in low-income households with children. Between 1990 and 1999, the percentage of children living below the poverty level in families maintained by two parents with at least one parent employed full-time increased from 44 to 52 percent. The percentage of poor children living in families maintained by a single mother employed full-time increased from 9 to 18 percent. To respond to the increase in working WIC families, WIC agencies are faced with the challenge of making nutrition services accessible to individuals with greater constraints on their time. Some WIC agencies have offered services that accommodate individuals who keep traditional work hours. 
For example, 26 percent of the local WIC agencies responding to USDA’s 1998 survey indicated that they offered extended hours, such as evening or weekend hours; fewer than 3 percent had mobile facilities that could potentially visit work or community sites. Four of our five local case study agencies offered extended hours on a few days each month, either in the evenings or on weekends, for a few hours. Several factors may limit the ability of local agencies to improve access to services for participants who work. First, local agencies may lack the resources to pay for the staff or the security needed to have their sites open during evening or weekend hours. Second, federal regulations generally require participants to pick up vouchers in person when they are scheduled for nutrition education or for recertification, which limits WIC agencies’ ability to employ other strategies such as mailing vouchers to participants’ homes. Third, providing WIC services at nontraditional locations, such as grocery stores, that may be more convenient for those who work, may infringe on the participants’ privacy and present a conflict of interest. The increase in the number of WIC participants who work will make attaining some of WIC’s goals, such as increasing breastfeeding, a greater challenge. Employer policies can affect the length of time a woman employee breastfeeds. One study found that the duration of the work leave significantly contributed to the duration of breastfeeding. In addition, businesses that employ WIC mothers may not provide accommodations that support daily breastfeeding needs. A 1996 survey of over 500 WIC mothers found that less than 2 percent of those who went to work or school reported having such accommodations, such as the ability to bring a baby with them or being provided facilities for breastfeeding. In 2000, WIC mothers who worked full-time had the lowest breastfeeding rate for infants at 6 months of any category of WIC mothers, even though they initiated breastfeeding in the hospital at about the same rate as other mothers. To respond to this challenge, WIC staff might need to work with employers and schools to encourage the adoption of procedures and facilities that support breastfeeding among employees and students. As a result of the increase in the number of working parents, low-income children are increasingly placed in daycare. In a recent study, we concluded that since the implementation of TANF, more low-income children were in care outside the home and were in this care earlier in their lives. Children who are in daycare may be unable to accompany their parents to WIC office visits for vouchers and nutrition education. As a result, WIC staff may have little opportunity to provide age-appropriate nutrition education directed at preschoolers, though evidence suggests such education contributes to positive eating behaviors. According to USDA’s 1998 survey, only about 38 percent of local WIC agencies provided nutrition education directed to WIC preschoolers. Since meals and snacks are usually provided in daycare settings, daycare providers play an important role in shaping the nutritional behavior of preschoolers. As more low-income preschoolers enter daycare, WIC may need to explore ways to broaden its nutrition education efforts to include the daycare providers serving WIC children more systematically. WIC faces the challenge of maintaining a skilled staff. 
The quality of nutrition services depends, to a large degree, on the skills of the staff delivering the services at the local WIC agencies. Yet, due in part to the widespread difficulty in hiring professionals, local agencies are increasingly relying on paraprofessionals to provide services. At the same time, social and systemic changes have heightened the need for WIC staff to learn new skills. However, investing in training is difficult for agencies with limited resources. Possible solutions to address WIC’s staffing and training needs are unclear because the staffing needs have not been assessed and there is not a defined commitment to training. Many local WIC agencies recently reported an insufficient number of professional staff and difficulty acquiring professional staff members. A 1998 USDA survey found that 30 percent of local WIC agencies serving over 40 percent of WIC participants reported having too few professional staff members. About half of all WIC agencies reported having difficulty recruiting and hiring professional staff. We estimated, based on information obtained from our survey of local WIC agencies, that in fiscal year 1998 between 5 percent and 15 percent of local WIC agencies did not have a nutritionist or dietitian on staff. The shortage of professional staff at WIC agencies is influenced by several factors, some of which are external to the WIC program. The most commonly reported difficulty associated with recruiting and hiring professional staff was that the salaries and/or benefits were not competitive. Another commonly reported difficulty was the lack of qualified applicants. According to a director of the American Dietetic Association, several factors may negatively affect the ability of WIC agencies to recruit registered dietitians, including the mundane nature of the work and the rural location of many agencies. The shortage in professional staff may worsen in the coming years. According to the Association director, who is also a state WIC director, WIC’s workforce is aging and a large number of professionals are expected to retire in the next few years. Many local agencies are relying more on paraprofessionals to provide nutrition services. According to data from USDA surveys, paraprofessionals now perform tasks that were once performed by professionals. In 1988, fewer than 2 percent of local agencies reported using paraprofessionals to provide nutrition education to high-risk participants and between 11 and 18 percent reported using them to provide nutrition education to low-risk participants. By 1998, this had changed considerably. That year, about 17 percent of agencies used paraprofessionals, along with professionals, to provide nutrition education to high-risk individuals and between 42 percent and 50 percent used them to provide nutrition education to low-risk individuals. The shift towards a greater reliance on paraprofessionals may be attributed to several factors. The difficulty in hiring professionals and the foreign language skills more often possessed by paraprofessionals may both play a role in this phenomenon. In addition, USDA officials pointed out that the required qualifications for competent professional authorities, who provide nutrition services, are “ridiculously low.” Consequently, WIC agencies are able to hire paraprofessionals to positions previously filled by professionals. 
As a result of the increased reliance on paraprofessionals, USDA officials and other experts have become concerned that the quality of nutrition services will suffer. The types of services that agencies offer may become increasingly limited without staff whose qualifications support a full range of services. Already, some WIC agencies have limited the services they provide. For example, in Montana where some local WIC agencies did not have registered dietitians on staff, state policy in 1999 prohibited all local WIC agencies from providing the type of nutrition counseling needed to address conditions such as gestational diabetes. According to a local agency director, not only did this restriction affect the quality of services provided to participants, but also it was a disincentive for registered dietitians to apply for WIC jobs because it limited their ability to use their skills. Given the changes in the WIC population and the environment in which the program operates, WIC agencies face an increased challenge of ensuring that their staff have the skills and knowledge to provide effective nutrition services. Many WIC staff may not have the skills and knowledge necessary to meet new client needs. For example, CDC, USDA, and other experts suggest that WIC staff currently lack the skills to address some emerging complex health issues, such as obesity. In addition, WIC staff may not have the knowledge to navigate the new environment introduced by changes in the health and welfare system. For example, the Emory University Rollins School of Public Health publication has suggested WIC staffs’ lack of understanding of the managed health care system has posed a barrier to effective coordination with managed care providers. To help address this lack of skills and knowledge on the part of WIC staff, more training may be needed. According to a CDC nutrition expert, to address emerging health problems, staff must learn to assess participants’ willingness to improve their eating practices and to tailor education to improve participants’ behaviors. In addition, WIC staff needs extra information to provide services in a changing social service environment. For example, they need to understand new requirements with which their participants must comply in order to obtain health care services from managed care providers. While WIC regulations require that state agencies provide in-service training and technical assistance to professional and paraprofessional staff involved in providing nutrition education, USDA officials indicated that no defined commitment has been made to improve the training opportunities for WIC staff. Without such a commitment, some local WIC agencies may be less inclined to invest limited staff time or funding in training or continuing education. For example, one case study agency reported that, because funding constraints left the agency short-staffed, professional staff were performing more clerical duties and had little time for professional development. Another local WIC agency director indicated that her program could not afford to have her attend an annual NAWD conference, even though the conference was being held locally. USDA has no current data about the size and composition of the WIC workforce, a situation that makes addressing staffing and training problems difficult because little is known about the exact nature of the staffing problems. Until 1991, USDA did collect some detailed WIC staffing data for its annual report of WIC administrative expenditures. 
However, according to USDA officials, one of the reasons the agency stopped collecting these data was to reduce the reporting burden on WIC agencies. While surveys of local agencies conducted for the biennial participant and program characteristics study in 1996 and 1998 gathered some limited data regarding the sufficiency of staff levels, there has been no recent study on the size and composition of the WIC workforce. The lack of data regarding the WIC workforce can present a barrier to developing and implementing strategies to address the workforce challenges facing the program. For example, in 1996 the National Advisory Council on Maternal, Infant, and Fetal Nutrition recommended that USDA explore with HHS revising the National Health Service Corps programs to include nutrition services as a designated "primary health service." This change would allow federal funds to be used to recruit and train registered dietitians and nutritionists to work in under-served areas. To do this, however, USDA needed data showing sufficient demand for registered dietitians and nutritionists in under-served communities. Although the Council repeated its recommendation in 2000, to date USDA has not collected data regarding the need for public health nutritionists in under-served areas. USDA is sponsoring a survey of the public health nutrition workforce. The survey results, expected to be published in 2002, will include a description of the qualifications, training needs, and other characteristics of the 1999-2000 WIC workforce. However, the survey will not provide information on the demand for dietitians and nutritionists in under-served areas. State and local WIC agencies are faced with the challenge of delivering participant services and managing program operations with outdated or unavailable information technology resources. More than half of state WIC agencies have management information systems that are not capable of automatically performing all the program tasks considered essential by USDA. In addition, while 16 states have been involved in the testing of electronic transfer of WIC benefits, only one statewide system has been implemented. Finally, almost one-fourth of the state WIC agencies, along with hundreds of local WIC agencies, do not have Internet access, limiting their ability to use online resources and communicate with other providers of nutrition and health services. According to a March 2001 USDA report, 56 percent of state WIC agency automated management information systems were not capable of performing, or efficiently performing, 1 or more of 19 essential program tasks. (A listing of the 19 essential program tasks is provided in appendix II.) These tasks were singled out as basic functions that were essential for state agencies to automate in order to attain efficient program operations. For example, management information systems should be able to automatically assess whether an applicant's income exceeds the maximum income level for eligibility based on data entered into the system. The system should also be able to produce food checks corresponding to the participant's most recent food prescription when the participant picks up the checks at the local clinic, and to detect suspicious grocery store food coupon redemption activity. The inability of WIC state agencies' automated management systems to perform essential tasks can hamper agencies' ability to efficiently administer program operations. 
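To make the income-screening task concrete, the sketch below shows, in highly simplified form, the kind of automated check described above. It is an illustrative sketch only; the income limits, data structures, and function names are hypothetical placeholders and are not drawn from actual WIC eligibility guidelines or any agency's system.

```python
# Illustrative sketch only: a simplified income-eligibility screen of the kind a WIC
# management information system would automate. The income limits below are
# hypothetical placeholder values, not actual WIC eligibility guidelines.
from dataclasses import dataclass

# Hypothetical annual income limits by household size (placeholder values).
INCOME_LIMITS = {1: 20_000, 2: 27_000, 3: 34_000, 4: 41_000}


@dataclass
class Applicant:
    household_size: int
    annual_income: float


def income_eligible(applicant: Applicant) -> bool:
    """Return True if the applicant's income is at or below the limit for the household size."""
    # Households larger than the largest size in the table use the top listed limit here;
    # a real system would extend the schedule according to published guidelines.
    limit = INCOME_LIMITS.get(applicant.household_size, INCOME_LIMITS[max(INCOME_LIMITS)])
    return applicant.annual_income <= limit


if __name__ == "__main__":
    print(income_eligible(Applicant(household_size=3, annual_income=30_000)))  # True under these placeholder limits
```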
At a local WIC agency in Pennsylvania, for example, we found that staff were using handwritten index cards to keep track of participant information because the agency lacked a sufficient number of computers to perform that function. Also, the director at this agency had to spend 6 hours each month manually counting the number of participants in the program to generate the monthly participation report required by the state. This was necessary because the agency's management information system was not capable of automatically preparing the report. A California WIC official told us that it was difficult for local WIC agencies' automated systems to create special reports. Because the reports could take up to several months to complete, some agencies opted not to generate them. A USDA official told us that the poor quality of automated systems in some states negatively affects federal and state efforts to monitor WIC agencies. Because of computer inadequacies, some states have not been able to provide USDA with requested data on breastfeeding initiation rates, hampering officials' ability to assess the effectiveness of breastfeeding promotion. Most states face one or more of the following obstacles that make it difficult to bring their automated systems up to the basic level of functionality: Limited funds. States must meet their management information needs almost entirely from their federal NSA grants. Other funds typically available from outside sources to help defray WIC costs, including those associated with information systems, have declined over the last decade. According to USDA, the cost of bringing WIC's essential program tasks up to standard in all states over the next 6 years is between $147 million and $267 million. Outdated technology. According to USDA and other federal studies, the life cycle for a WIC automated system is 7 years. After that time, the states' systems do not lend themselves easily, if at all, to technological advances. About 34 percent of WIC state-level agencies have automated systems that have exceeded their life cycle, 28 percent have systems that will exceed their life cycles in 1 to 3 years, and 38 percent have systems with 4 or more years remaining in their life cycles. Coordination with other systems. WIC was designed to operate in conjunction with programs offered by other social and health-related service agencies. Changes that have occurred in these programs have complicated the ability of WIC program managers to define the functions that their automated systems must support and to identify the system requirements, including the necessary applications and hardware needed to effectively coordinate WIC with other programs. Lack of information technology staff. State and local WIC agencies have difficulty competing with the salaries and benefits offered by private sector employers. This can affect their ability to recruit and retain qualified information technology staff needed to develop and maintain their automated systems. Currently, most WIC food transactions involve paper checks. However, concerns have been raised about the cost to grocers of processing checks and the inconvenience they present to WIC participants. Electronic benefits transfer (EBT), an automated process that allows food to be paid for electronically, offers an alternative to paper checks. With EBT, participants are given a plastic card, similar to a credit or debit card, containing their food benefit prescription, which they use to purchase the prescribed foods at the grocer's checkout. 
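The following sketch illustrates, under stated assumptions, how an EBT purchase might be validated against the electronic food prescription carried on a participant's card. The product codes, quantities, and function names are hypothetical and are not drawn from any actual WIC EBT system.

```python
# Illustrative sketch only: validating a WIC EBT purchase against a participant's
# electronic food prescription. Product codes and quantities are hypothetical.

# Hypothetical remaining food prescription for one participant, keyed by product code.
prescription_balance = {
    "070074000017": 2,  # cans of infant formula remaining this issuance period
    "011110000000": 4,  # quarts of milk remaining this issuance period
}


def approve_item(upc: str, quantity: int, balance: dict) -> bool:
    """Approve a scanned item only if it is a prescribed food with enough balance remaining."""
    if balance.get(upc, 0) < quantity:
        return False          # not a prescribed food for this participant, or balance exhausted
    balance[upc] -= quantity  # draw down the remaining prescription
    return True


if __name__ == "__main__":
    print(approve_item("011110000000", 2, prescription_balance))  # True: 4 quarts available
    print(approve_item("999999999999", 1, prescription_balance))  # False: not on the prescription
```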
USDA and state WIC agencies are exploring the use of EBT in the WIC program to improve the benefit delivery process. Paper checks have a number of drawbacks. A 2000 Food Marketing Institute study that compared the use of WIC’s paper checks for the purchase of food to other methods—including cash, checks, credit and debit cards, food stamps, and EBT—found that WIC checks are among the most costly payment methods for food retailers. The study indicated that the primary reasons for this higher cost are that store staff take more time to process paper checks when goods are purchased and to prepare checks for bank deposit. In addition to high costs, paper checks can cause confusion and delays for both the participant-shopper and the store clerk at the checkout counter and result in unwanted attention. Thus far, EBT for WIC has proven to be much more expensive than paper for states testing this evolving technology, according to USDA officials. However, compared to the use of WIC paper checks, EBT is less expensive for food retailers because it reduces handling costs. In addition, EBT can provide participants with greater flexibility in purchasing food. For example, it will allow them to purchase their benefits in quantities as needed within their issuance period. With paper checks, a participant must purchase all items on the food instrument when shopping or forfeit the benefit. EBT can also provide state officials with documentation of WIC purchases for submitting rebate claims to food manufacturers. By tying EBT to a product code of authorized WIC foods, the program has assurance that participants purchase the prescribed foods and do not improperly substitute foods. EBT may also curtail the waste, fraud, and abuse that can occur with paper checks. USDA is exploring the use of EBT to eliminate the need for paper checks. Since 1991, the agency has provided a total of about $22 million for demonstration projects involving 16 states to explore the use of EBT technology for the delivery of WIC benefits. However, no one knows how soon the widespread use of EBT will be realized in each state, or exactly what form the new issuance system will take. As of October 2001, only Wyoming had implemented a statewide WIC EBT system. Federal legislation, developments in the food retail and electronic funds transfer industries, and emerging technologies will shape the timing and nature of EBT implementation. According to USDA officials, WIC had two overall concerns in venturing into EBT: the technical feasibility and affordability of implementing EBT systems. In the few state projects where EBT has been tested, the first concern has been addressed—EBT is technically feasible. However, so far its affordability for use in WIC remains elusive. According to USDA officials, EBT costs are far beyond what most states can afford within their available NSA funds. WIC agencies would need to modify their NSA funding priorities or find new sources of funds to support their EBT projects. USDA officials also told us that these costs have had to be funded by federal grants at the sacrifice of other competing program priorities. Furthermore, because EBT processes differ in so many respects from those involving paper checks, agencies may face some of the following obstacles in implementing EBT: Limited federal funds. The potential cost of starting up and operating EBT is an issue of considerable importance to all state and local WIC agencies. 
These costs may not be covered by their NSA funds allocated for technology expenditures. As a result, WIC agencies would need to modify their NSA funding priorities or find new sources of funds to support their EBT projects. Outdated technology. Some local WIC agencies are unable to use EBT because they do not have computers, or they have computers that are unable to accommodate the necessary technology. WIC computer equipment must have the processing speed and communications capability to electronically transmit EBT data. In addition, software changes may also be needed to enable older systems to operate in conjunction with EBT. Lack of an industrywide standard. An industrywide standard for EBT systems that could be used for WIC transactions has not yet emerged. The various EBT technologies must be compatible with retailers’ normal transaction systems to perform the purchase function. The integration of different EBT technologies requires a common operating system standard, such as those used by credit card companies. The absence of such a common nationwide standard makes the widespread development of EBT applications very difficult. The Internet can be used by federal, state, and local agencies for a variety of purposes related to the WIC program. USDA uses the Internet to provide state and local WIC agencies with program information, such as eligibility guidelines, application instructions, program funding, participation rates, and current laws and regulations. USDA also uses the Internet to provide research and training to health and nutrition professionals, including those outside of WIC. USDA has plans to use the Internet to disseminate information to help reduce program fraud and to collect information directly from grocery stores participating in the WIC program. About half of the state agencies and some local agencies that have Internet access have established Web sites for their WIC programs. These sites have been used to provide information—including eligibility guidelines, application procedures, program benefits, and clinic locations—to WIC participants and potential applicants. In addition, some local WIC agencies use the Internet to e-mail state agencies and obtain or provide information on nutrition activities and services. According to USDA, 68 of the state-level WIC agencies had the capability to access the Internet as of July 2001. The capability of local WIC agencies to access the Internet is more difficult to ascertain. However, according to the Director of the National Association of WIC Directors, about half of their 600 local agency members currently have the ability to access the Internet. While the Internet is being used extensively by USDA and many state and some local agencies, the following obstacles have discouraged or prevented some state and local WIC agencies from obtaining Internet access: Limited funds. Accessing the Internet requires the necessary computer equipment that many local WIC agencies and/or their clinics do not possess. The costs of computer installation must compete against other WIC funding demands, such as salaries, utilities, and supplies. Even with the necessary computer equipment, local WIC agencies and/or their clinics may choose to forgo Internet use in some areas because they may have to pay costly long distance charges for the telephone connections to the Internet provider from funds that are competing with other more essential program needs. Security concerns. 
Although local agencies may have the computer capability to access the Internet, concerns regarding the security vulnerabilities inherent in the use of the Internet, including unauthorized access to files and hostile "virus" attacks on computer systems, may discourage its use. For example, the Pennsylvania WIC agency prohibits Internet connections by its local agencies primarily because of concerns that unauthorized persons could improperly gain access to sensitive personal information. In attempting to be responsive to recent requests from the Congress and others, WIC faces the challenge of assessing the effects of providing specific nutrition services. According to USDA officials, the focus on assessing the effects of specific nutrition services is a shift from the early years of WIC, when assessments focused on the outcomes associated with overall program goals, such as reducing national rates of anemia, infant mortality, and low birth weight. In order to assess the effects of specific nutrition services, such as nutrition education, USDA needs good outcome measures for each service, consistent information from states regarding the attainment of goals and objectives for each service, and reliable research on the effectiveness of each service. However, to date, the agency has been able to collect data on only one outcome measure related to breastfeeding promotion and support. In addition, USDA has obtained inconsistent data on state goals and objectives and limited information from research studies on the effectiveness of specific nutrition services. To meet the Government Performance and Results Act requirements, USDA has attempted to develop national outcome measures that would allow the agency to determine the effectiveness of WIC's nutritional services. To date, USDA has had limited success in establishing national outcome measures for WIC's three key nutrition services—nutrition education, breastfeeding promotion and support, and health referrals. USDA has been able to collect information on only one outcome measure: breastfeeding initiation rate. This measure helps determine the effectiveness of a single nutrition service, breastfeeding promotion and support. Not only is this outcome measure relevant to only one nutrition service, but it also looks at a limited aspect of this service. The breastfeeding initiation rate examines only one of several important aspects of the service's possible impact on breastfeeding. It does not measure the length of time that WIC mothers breastfeed infants because, despite USDA's effort to collect data on the duration of breastfeeding, most state agencies were unable to give the agency complete information on this measure. In addition, USDA was unable to collect data on an outcome measure that would determine the percentage of WIC infants' daily nutrition obtained through breastfeeding because the agency was unable to identify a viable way to collect these data. Although USDA has identified outcome measures for other nutrition services, obstacles have hindered the agency's success in collecting relevant data. These obstacles include difficulties in identifying the type of data to collect because many variables may be influencing outcomes. For example, there are several other state and local programs that, like WIC, are aimed at improving health through nutrition education. Separating the effects of these efforts from those of the WIC program is difficult at best. 
USDA has also had few resources to collect appropriate data on measures it identifies. As a result, USDA is unable to implement most outcome measures. USDA’s difficulties in measuring WIC outcomes are not unique. In a previous study, we found that programs that do not deliver a readily measurable product or service or are intergovernmental grant programs have difficulty producing performance measures. As NSA grant recipients, state agencies are required to describe their goals and objectives for improving program operations in their annual program plan given to USDA. However, we found that for several reasons, this information does not provide USDA the data necessary to describe the extent to which WIC is meeting its intended NSA goals. First, no requirement exists that state goals and objectives be reported in a consistent format to USDA. Without consistent information, it is difficult for USDA to aggregate reported state performance information on a regional or national basis. Second, there is no requirement that the goals or objectives be measurable. Our review of a sample of over 400 state goals and objectives for nutrition services from 25 state WIC agencies revealed that over half lacked key information, such as baseline or target values, needed to measure progress toward improving program operations. Third, we observed that the specificity in the description of the goals or objectives varied significantly. For example, some objectives were short, general statements such as, “continue to improve the data integrity of the WIC data warehouse.” Other objectives were very detailed, including such information as the activities undertaken to achieve the objective. Moreover, a wide range existed in the number of goals or objectives identified. For instance, one state had 2 goals and 2 objectives, while another state had 13 goals and no objectives, and still a third had no goals and 24 objectives. Last, unlike the Department of Health and Human Services’ (HHS) Maternal and Child Health Services Block Grant Program, state WIC goals and objectives are not readily available for review, nor is progress toward the goals automatically tracked. As of late 2000, USDA had not compiled the state goals and objectives. Nor did it have the capability to do so easily. The ability to automatically track outcomes appears to be limited, in part, by data collection at the state-level agencies. For example, according to USDA officials, fewer than half of the state-level agencies were able to provide sufficient data on the duration of breastfeeding because the automated information systems did not contain complete data on each participant. Few research studies exist on the effects of specific nutrition services. In a prior report, we identified seven such studies published between 1995 and 2000. Four of the studies examined the impact of breastfeeding promotion and support, two focused on health care referrals, and one examined both nutrition education and breastfeeding promotion and support. However, the results of these studies provide few, if any, insights into the effects of specific WIC nutrition services. One reason so few successful impact studies exist is the difficulty many researchers face in conducting them. Researchers encounter difficulties because of the following: Data constraints. We found that the nature of available data severely limited the usefulness of several of the impact studies of WIC nutritional services. 
The three major sources of WIC data are USDA's WIC Participant and Program Characteristics (PC) data, CDC's Pediatric Nutrition Surveillance System (PedNSS), and CDC's Pregnancy Nutrition Surveillance System (PNSS). The PC data, which have been collected every 2 years since 1988, provide a snapshot of the characteristics of WIC enrollees at the time data are collected. The PedNSS and PNSS annually track the health status of children and the risk factors of mothers who participate in selected federal programs, including WIC. Since none of these data sources currently track the same individuals over time or collect information on the types of services that individual participants receive, researchers cannot use the data to associate WIC services with changes in participant characteristics. In addition, the available data from other national surveys may be too old to reflect current demographics or services. Research design. Designing sound impact research can be problematic. To determine the effect of services, research must assess the extent to which program interventions affect participants. To do this, other possible influences must be excluded, a task that is best accomplished through the use of random assignment, whereby individuals are randomly placed in either a group receiving program services or a group denied program services. Research studies that employ random assignment can be problematic because some children will be denied program services. This is especially challenging for a program like WIC that has enough funds to serve all qualified applicants. Program variation. WIC agencies can provide their services differently, a fact that complicates drawing broad conclusions about services' effects. Because WIC is a grant program, state agencies are given the discretion to implement key program elements, such as the content of nutrition education, in a way that suits local needs. This discretion can lead to substantial variation in the services that WIC participants receive. Lack of funding. The lack of sufficient funding, according to USDA and CDC officials, is another factor that makes it difficult to conduct WIC-related research. Before 1998, USDA spent about $3.5 million annually on WIC-related research—an amount that was insufficient to collect the primary data and conduct the complex research necessary to assess the effect of WIC services, according to USDA and CDC officials. This problem is not unique to USDA. In 1996, we surveyed 13 federal departments and 10 independent federal agencies and found that relatively small amounts of resources were allocated for conducting program evaluations in fiscal year 1995, and these resources were unevenly distributed across the agencies. 
As a result, WIC agencies have had to cut costs and make changes in service delivery that potentially will have a negative impact on the quality of WIC services. Since the late 1980s, new requirements placed on the WIC program have directly affected service delivery and program administration. Table 1 shows some of the major federal requirements added since 1988 and the associated service and administrative responsibilities. Little is known about how much meeting these additional requirements will cost the program. Costs have been estimated for only two of these requirements. USDA estimated that strengthening vendor monitoring would cost states and local agencies about $7 million annually. The National Association of WIC Directors estimated that increasing the emphasis on immunization education, documentation, and referrals could cost as much as $37 million annually. Officials from the CDC agreed with NAWD's cost estimate. In recognition of the increased demands that have been placed on the program, the Congress in recent years has reduced some requirements. However, according to USDA officials, these reductions do not offset the additional requirements. The reductions have generally been administrative in nature and have had little or no impact on the services provided directly to WIC participants. For example, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193) reduced some of the burden associated with the submission of annual program plans. States are no longer required to submit a full program plan each year; rather, after a submitted plan is approved, a state submits only substantive changes in subsequent years. Federal mandates are not the only source of increased demands placed on the program. State WIC agency officials have considerable flexibility to impose additional program requirements in their states. To contain the cost of food, state officials have imposed a variety of limitations on the food WIC participants in their states can select. For example, some states require participants to purchase the lowest-cost brand of an approved food item. Such requirements place administrative demands on NSA resources because local agency officials must monitor retailer and participant compliance with selection limitations. In addition, such requirements can increase the amount of time needed to explain food selection limitations to participants, reducing the time spent on needed nutrition education or counseling. Each year, USDA must use a national per participant NSA grant amount, set by law, to determine the funding to be used for food and NSA grants. This per participant grant amount is based on the national average NSA grant expenditure per participant per month in 1987, adjusted only for inflation. In fiscal year 2001, grant levels were based on a national average of $12.27 per participant per month. Before the average NSA grant per participant was used, funding for NSA was set at 20 percent of the total WIC appropriation. Since then, the percentage of federal WIC funds dedicated to NSA has increased to about 27 percent—perhaps giving the impression that, with such a substantial portion of program funds, NSA funds are sufficient to cover the costs of additional responsibilities. However, this increase is not the result of more funds per participant being dedicated to NSA; rather, it is the result of a decrease in the amount of federal funds needed to cover the food purchasing portion of the program. 
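A rough illustration of this arithmetic, using rounded amounts (in billions of dollars) consistent with figures cited elsewhere in this report (about $1.1 billion spent annually on NSA and about $1.5 billion in annual rebate savings), shows how a smaller federal food bill raises NSA's share of the federal grant even though NSA itself has not grown; the roughly $3.0 billion federal food figure is an illustrative value implied by those amounts, not a number reported here:

```latex
\frac{\text{NSA}}{\text{NSA} + \text{federal food funds}}
  \approx \frac{\$1.1}{\$1.1 + \$3.0} \approx 27\%
\qquad
\frac{\text{NSA}}{\text{NSA} + \text{federal food funds} + \text{rebate savings}}
  \approx \frac{\$1.1}{\$1.1 + \$3.0 + \$1.5} \approx 20\%
```

When the rebate-covered portion of food costs is included in the denominator, NSA's share returns to roughly the 20 percent level at which it was originally set.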
Food costs have been dramatically reduced by the infant formula rebates, in which companies reimburse the WIC program a percentage of the cost of every can of formula purchased by program participants. USDA projects that in fiscal year 2001, savings from infant formula rebates will total about $1.5 billion. This amount covers the cost of about 28 percent of food benefits provided to participants. If rebate savings are considered, NSA has remained roughly 20 percent of total program costs from 1988 through 1999. Figure 2 shows the percentage of program funds spent on NSA, including and excluding rebate savings. State and local WIC agencies appear to be relying more heavily than they did in the past on federal grant funds to cover the costs for NSA. Based on our survey of state and local WIC agencies in fiscal year 1998, about $57 million for NSA was received from sources other than the federal government. Most of these additional funds, $38 million, were given to 11 state WIC agencies by their state governments. Local governments provided most of the remaining funds to local WIC agencies. While no good historical data exist on the level of funding state and local governments have provided specifically for NSA, USDA officials have found that the number of states providing funds to the WIC program for nutrition services has declined. In addition, those states that do provide funds have reduced the amount they contribute. For example, in fiscal year 1992, 18 states made about $91 million in appropriated funds available for WIC, while in 2001, 13 states made about $45 million available. Some state and local agencies have sought additional funding for nutrition services from other sources. California WIC, for instance, has initiated the "WIC Plus" program to assist local agencies interested in obtaining additional funding from other sources, such as reimbursements for nutrition services provided for WIC participants enrolled in Medicaid. The New York WIC program is currently formalizing an agreement with the state's TANF program to obtain funding for providing additional nutrition services for WIC participants enrolled in TANF. However, the extent to which WIC agencies rely on other types of contributions has diminished. Historically, WIC agencies have made use of a variety of nonprogram resources, typically in-kind contributions such as donated space, to cover some of the costs of WIC's nutrition services and program administration. But, according to the California WIC director, the time and resources needed to apply for and administer additional funding, such as foundation grants, can discourage WIC agencies from pursuing it. A 1988 USDA study found that at 16 local agencies, the share of costs covered by such nonprogram resources was substantial—54 cents for every program dollar. However, our recent work at six agencies found the share of costs covered by such resources to be much lower, ranging from 2 cents to 20 cents for every program dollar. According to state and local WIC officials, responding to the increased demands placed on the program using existing resources has required actions, such as changes in service delivery and cost cutting, that may lower the quality of WIC services. Almost 40 percent of the local agencies responding to our survey reported that additional federal requirements have resulted in a decrease in the average amount of time spent providing nutrition services. 
State and local officials repeatedly raised the concern that the additional demands cut into the limited time available to provide nutrition education and counseling. According to one program expert, even the infant formula rebate requirement can cut into nutrition education because staff must take time to explain how the rebate works and what products are eligible. According to the executive director of the National Association of WIC Directors, balancing increased program demands and available resources has forced some WIC agencies to cut costs by not increasing office space, personnel, and information technology in response to increasing needs. The 1998 USDA survey suggests that the negative consequences of such cost cutting may be extensive. According to that study, 22 percent of local agencies, serving almost 25 percent of all WIC participants, reported having inadequate office space. Additionally, 30 percent of local agencies, serving about 41 percent of all WIC participants, reported having insufficient numbers of professional staff. Finally, as reported earlier, 56 percent of state WIC agency automated management information systems were not capable of performing, or efficiently performing, 1 or more of 19 essential program tasks. We identified 16 approaches that could be considered to address 1 or more of the 6 major challenges facing the program. The approaches were identified based on the following assumptions: (1) WIC will continue to be administered by USDA, (2) income eligibility requirements will remain relatively unchanged, and (3) the program will continue to operate as a discretionary grant program. Each addresses a specific aspect of one or more of these challenges. For example, four of the approaches focus on funding; four relate to performance or impact measurement; three address staffing issues; three relate to information technology; and two relate to the provision of nutrition services. Most of the approaches also address other problems, even if tangentially. Table 2 shows the challenges we think each approach can help address. While the approaches offer certain advantages, they also have potential negative consequences that policymakers should consider. During our work, we encountered other potential approaches in addition to the 16 we selected; however, we focused on those that most directly addressed the major challenges we identified. Our assumptions precluded some approaches, such as moving the administration of WIC from USDA to HHS, changing the program's income eligibility requirements to target lower income individuals, and making WIC an entitlement program. Such approaches may warrant further study. A more detailed description of the approaches—including potential implementation strategies, a description of the rationale for considering each approach, and possible advantages and disadvantages—is provided in appendix III. The WIC program is facing serious challenges in its efforts to deliver high-quality nutrition services. Changes in WIC's service environment and additional requirements are straining the program's ability to provide effective nutrition services. Program stress will likely increase in the future because the program is considered a major point of access to health services for low-income infants and preschool children, creating the expectation that the program can do even more to help address emerging health issues in this population. 
In 2002, the Congress, through the reauthorization process, will begin to make decisions that could fundamentally affect the program's ability to meet the challenges it faces in the delivery of nutrition services. In essence, the Congress will be reexamining its expectations for the program and the resources needed to meet those expectations. In describing the major challenges facing the program and approaches that could help to address the challenges, this report provides a structure for carrying out that reexamination. Most of the approaches could involve basic changes in program structure or the way nutrition services are funded. Decisions to adopt such approaches, whether in part or in whole, ultimately rest with the Congress. However, in regard to two of the approaches, recruiting and retaining a skilled staff and assessing the effects of nutrition services, the Congress lacks some information that would benefit decisionmaking. In order to help the Congress and USDA identify strategies to address the program's challenges in recruiting and retaining a skilled staff and assessing the effects of nutrition services, we recommend that the Secretary of Agriculture direct the Administrator of the Food and Nutrition Service to take the following actions: Work with the Economic Research Service and the National Association of WIC Directors to conduct an assessment of the staffing needs of state and local WIC agencies. This assessment should examine factors such as staffing patterns, vacancies, salaries, benefits, duties, turnover, and retention. Work with the Economic Research Service, the National Association of WIC Directors, and other stakeholders, including the CDC, to develop a strategic plan to evaluate the impacts of specific WIC nutrition services. This plan should include information on the types of research that could be done to evaluate the impacts of specific nutrition services as well as the data and the financial resources that would be needed to conduct such research. We provided a draft of this report to USDA's Food and Nutrition Service for review and comment. We met with Food and Nutrition Service officials, including the Acting Administrator. The agency officials generally agreed with the report's findings and recommendations. The officials also provided some technical changes and clarifications to the report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; interested Members of the Congress; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact me or Thomas E. Slomba at (202) 512-7215. Key contributors to this report are listed in appendix IV.
American Dietetic Association, c/o Arizona Department of Health Services, Phoenix, AZ
American Enterprise Institute, Washington, D.C.
WIC/Supplemental Nutrition Branch, California Department of Health Services, Sacramento, CA
Center on Budget and Policy Priorities, Washington, D.C.
Department of Health and Human Services, City of Long Beach, Long Beach, CA
Food Research and Action Committee, Washington, D.C.
Food Marketing Institute, Washington, D.C.
Gallatin City-County Health Department, Bozeman, MT
National Advisory Council on Maternal, Infant & Fetal Health, c/o United Health Centers of San Joaquin Valley, Inc., Parlier, CA
National Association of WIC Directors, Washington, D.C.
Maternal Child Health, Grady Health System, Atlanta, GA
Minnesota Department of Health, St. Paul, MN
Montana Department of Public Health and Human Services, Helena, MT
Pennsylvania Department of Health, Harrisburg, PA
Zuni WIC Program, Pueblo of Zuni, N.M.
Economic Research Service, U.S. Department of Agriculture, Alexandria, VA
Food and Nutrition Service, U.S. Department of Agriculture, Alexandria, VA
Office of Budget and Policy Analysis, U.S. Department of Agriculture, Washington, D.C.
Administration for Children and Families, U.S. Department of Health and Human Services, Washington, D.C.

This appendix describes the 19 essential program tasks, identified by USDA, that a WIC automated management information system should be able to perform in order for program operations to be efficient.

1. Make WIC services more accessible to applicants and participants by increasing the variety of service providers. This could be accomplished by the following: Change legislation to allow the states to use demonstration projects to test and evaluate the use of for-profit entities, such as health maintenance organizations, as local WIC agencies. Encourage or require state agencies to give a greater preference (consideration) to local agency applicants that provide a greater proportion of services (1) during evening or weekend hours, (2) at more convenient locations, and (3) in the native language of applicants or participants. Rationale. WIC was designed to serve poor and low-income women and children as an adjunct to good health care; therefore, it should be highly accessible to this population. Service delivery by WIC agencies has become more difficult due to changing health and social services delivery systems and changing characteristics of the population served by the WIC program. By having a greater variety of providers and service locations, applicants or participants may have greater access to WIC services. Potential advantages of this approach include the following: Participation among working families and students may increase. At-risk individuals who do not have access to traditional clinics may be reached. Partnerships with other community organizations may be formed, reducing the funding required to support multiple locations. Additional providers may create a more competitive market for WIC services, improving customer service. The local WIC program may receive added exposure in the community, improving its ability to attract potential participants. Potential disadvantages of this approach include the following: Authorized grocery store vendors that are allowed to provide space could compromise the independence of the state and local agencies in their vendor management roles and create the appearance of a conflict of interest. The integration of WIC with health services may be more difficult if WIC is operated at alternative locations, such as grocery stores. Inconsistent and inaccurate information may be provided at alternative locations, resulting in a lack of program continuity and standardization. Staff members who are bilingual, or who are willing to work evening and weekend hours or in low-income neighborhoods where safety is a concern, are difficult to find. Few new agencies are applying to be WIC providers. WIC applicants, participants, staff, and others may get confused about service delivery if multiple WIC providers exist without defined service boundaries.
2. Improve WIC's ability to respond to emerging health issues, such as obesity and diabetes, and to participants' nutritional needs by expanding the range and scope of nutrition education. This could be accomplished by the following: Expand nutrition education and breastfeeding promotion curricula to include such topics as the benefits of physical activity and the influence of media advertising on the food preferences of parents and children. Place greater emphasis during educational sessions on participants' eating, feeding, and shopping practices or behaviors. Increase the use of multiple strategies when counseling participants. Provide more age-appropriate nutrition education to preschool-age WIC participants. Rationale. Over the past decade, the incidence of obesity and diabetes among adults and children has reached epidemic proportions, especially among lower income individuals. The nutrition education and breastfeeding promotion sessions provide an opportunity for WIC staff to help participants prevent these diseases. However, we observed that the quality of the nutrition education provided to WIC participants varied significantly. Experts indicate that nutrition counseling that addresses eating behaviors and/or that uses a variety of teaching strategies can be more effective in preventing obesity and other nutrition-related illnesses. Potential advantages of this approach include the following: Disease prevention may be less costly than treatment. Increased participant interest in nutrition classes may result in increased knowledge and application to daily life, leading to better health. Training professional staff to provide information on emerging health issues may improve the image of WIC staff. Impressionable preschool children may be taught positive messages that can shape lifelong nutrition and health choices and help them influence parents and caregivers. Job satisfaction may improve for registered dietitians who are able to use more advanced skills. Potential disadvantages of this approach include the following: Suggested strategies may require longer WIC appointments, and participants may be too tired, busy, or stressed to take advantage of the education. Too little research exists to determine the most effective strategies. Staff members lack expertise and training on various topics outside of basic nutrition. Better nutrition education and breastfeeding promotion will require additional staffing and resources at the local agency level. Parents may be inconvenienced by making preschool children available for education because, with more parents working, children are infrequently at WIC sites. 3. Assess the staffing needs of the state and local WIC agencies and develop strategies to address any shortcomings. This could be accomplished by the following: Conduct a national study to examine staff distribution, duties, recruitment, retention, and job satisfaction. USDA working with its partners—such as state WIC agencies, HHS, and NAWD—to develop and implement agreed-upon strategies. Rationale. Relatively little national data are available on the size and composition of WIC staff. However, indications from USDA surveys suggest that local WIC agencies are having difficulty recruiting and retaining professional staff. Because of the lack of national data, little is known about the exact nature of the staffing problems. 
Potential advantages of this approach include the following: The opportunity may be created to define completely what tasks WIC should be undertaking at the various staffing levels, the level of effort needed, and the appropriate distribution of duties among various types of staff. Information may provide an objective basis for funding requests. The image associated with working for the WIC program among nutrition professionals may be improved, along with staff retention. The quality of nutrition services may be improved. Potential disadvantages of this approach include the following: National data may not take into account the variations in state and local agency regulations or local job markets and may be difficult to interpret for local agencies. Limiting the study to current staffing and duties, without first defining the tasks that WIC must complete to achieve the results the program is intended to achieve, would not be as valuable to improving services. Additional resources are needed to assess and address staffing needs. Some factors affecting staffing are independent of USDA. 4. Establish more stringent professional staffing requirements for local WIC agencies. This could be accomplished by the following: Develop an ideal “staffing plan” based on the number of participants per agency. Such a plan would identify the types of duties performed by professional, paraprofessional and support staff to make the most effective and efficient use of available resources. Establish standards for staff-to-participant ratios, including the number of dietitians, nutritionists, or lactation specialists an agency should employ, or have access to, based on its number of participants. Rationale. No requirement exists that local WIC agencies employ a dietitian, nutritionist, or lactation specialist or that their staff members have access to the services of these professionals. We observed that the availability of nutrition professionals who had sufficient time to provide individual counseling varied from agency to agency, resulting in a range of the quality of services provided. Without staffing requirements to ensure a minimum level of access to professional nutrition services, local agencies may not be able to provide adequate services, especially to high-risk participants. Potential advantages of this approach include the following: Proper staffing may increase participant satisfaction. Quality of services may be improved. Job satisfaction may be increased by clearly describing responsibilities for various staff members. The program may be better able to respond to emerging health issues. Funds needed to provide high-quality services may be more easily estimated. Potential disadvantages of this approach include the following: NSA funding may need to be increased. Research is needed to determine what constitutes an “ideal staffing plan” and the tasks required by each occupation. The availability of professional staff may be limited in some areas. Staffing ratio needs to be based on the nutritional status of participants, rather than the number of participants. Legislative changes to the program may be needed. If standards focus on professionals, the role of paraprofessionals may be diminished. 5. Establish minimum continuing education requirements for WIC staff in the areas of nutrition, breastfeeding promotion, and counseling. 
This could be accomplished by the following: Develop national training requirements for WIC service providers, both professional and support staff, with input from WIC-related professional associations and appropriate federal agencies, such as CDC. Require states to establish continuing education requirements for their WIC agencies. Rationale. Currently, WIC staff are not required to continue their education, despite the fact that knowledge in the health and nutrition fields has evolved. Recent nutrition research has provided new information on diets to prevent illness, on innovations in nutrition counseling, and on new nutrition-related health concerns, such as the epidemic rise in obesity. Requiring all WIC staff to receive continuing education, even those not required to meet professional certification and licensing requirements, could improve the quality of WIC services and enhance the professionalism of WIC staff. Potential advantages of this approach include the following: The qualifications of WIC staff may improve. Staff retention and job satisfaction may be increased. The quality of nutrition services may be improved and the amount of misinformation provided to WIC participants may be decreased. Training could be more focused on program needs, not just on professional certification or licensing requirements. Potential disadvantages of this approach include the following: Additional NSA resources are needed to implement training and continuing education requirements. It is unlikely that a universal plan could be devised to fit the wide range of staff availability, costs, and client needs at the local agencies. Training requirements may discourage employment in WIC if the time and expense are to be assumed by employees. Reporting requirements may be increased at the state and local agencies to ensure compliance. 6. Expedite the implementation of the components of WIC's 5-Year Technology Plan related to the development of a model management information system and the facilitation of multistate acquisitions of management information systems. This could be accomplished by the following: USDA could prepare a report for the Congress in the next 2 years that outlines the features of a model system, the legislative and regulatory changes required to facilitate multistate acquisitions, and the associated funding needs. Rationale. USDA has identified 19 essential program tasks that WIC management information systems should be able to perform, such as participant certification, benefit delivery, vendor management, and funds management. Some of these tasks are currently beyond the capability of over half of the state agencies. USDA has also noted that about 60 percent of state systems have exceeded or will exceed their life cycles within 3 years. A model management information system and the facilitation, through state partnerships, of the acquisition of management information systems have the potential to accelerate the upgrade of state systems and promote greater standardization of needed program data. Potential advantages of this approach include the following: The multistate purchase of equipment and services for new systems and/or upgrades may reduce administrative burdens for individual states, lower costs and save time, and accelerate the acquisition of system enhancements for some states. Greater consistency and standardization may occur in WIC assessments and service delivery. Program participation in CDC's pediatric and pregnancy nutrition surveillance systems may be improved. Program fraud may be decreased nationwide. 
Collaborative, nationwide technical standards may be created that could facilitate program communications, including the transfer and sharing of data. Potential disadvantages of this approach include the following: State legislative and regulatory barriers may discourage multistate purchases of equipment and services. Sources of additional funds needed for development of standards and for implementation of the systems are uncertain. A system that has the flexibility to accommodate a wide range of state-specific requirements and applications will be difficult and expensive to create. USDA may not have the technical expertise necessary to develop a model management information system. Very often when model systems are developed, by the time they are completed, technology and program requirements have evolved sufficiently to render the model less useful than anticipated. 7. Ensure that all local WIC agencies have direct Internet access. This could be accomplished by the following: Set a target date for state WIC agencies to ensure that all local agencies have direct access to the Internet. Rationale. The Internet can be used by federal, state, and local agencies for a variety of purposes related to the WIC program. USDA uses the Internet to provide state and local WIC agencies with program information, such as eligibility guidelines, application instructions, program funding, participation rates, and current law and regulations. Yet, available information indicates that hundreds of local agencies lack direct Internet access. The lack of Internet access may be due to several factors, such as the availability of telephone lines and local Internet providers. The quality of WIC services could be improved by enabling all local WIC professionals to efficiently communicate directly with USDA, other WIC agencies, and nutrition or health experts via the Internet. Potential advantages of this approach include the following: Local agency Web sites for communicating program access information may increase WIC participation. Nutrition education materials may be made more accessible. WIC staff may be given the option of distance learning and self-paced training opportunities. Nutrition, health, professional, and other information may be made more accessible, especially to remote locations. Staff effectiveness may greatly improve. Communication and reporting between federal, state, and local agencies may be facilitated. The Internet may help WIC staff to locate potential sources of financial support. Potential disadvantages of this approach include the following: The added expense of hardware, software, and Internet service may not be covered by state funding, requiring the use of limited nutrition services and administrative funds. Internet expense may not be justified by its impact on program operations. Potential exists for abuse by WIC staff. Computer systems and participant records may be vulnerable to viruses or hackers. 8. Implement nationwide electronic benefit transfers for WIC food benefits. This could be accomplished by the following: Set a target date for implementation of EBT systems. Test and evaluate a variety of EBT systems—such as smart card, magnetic strip, and Web-based technologies. Develop key infrastructure elements, such as a database of WIC-specific universal product codes, to support the implementation of EBT systems. Rationale. WIC participants typically receive paper vouchers or checks to purchase specific foods prescribed by WIC staff. 
The grocery industry reports that transactions involving these vouchers or checks incur comparatively high costs. USDA and the WIC retail community have established goals to reduce the transaction costs for grocers and improve the buying experience for WIC participants. An EBT system has the potential to help WIC meet these goals, but the infrastructure is not yet in place to support such systems. Potential advantages of this approach include the following: The timeliness and accuracy of financial transactions may be increased. Program fraud and abuse may be minimized. Paper use associated with voucher printing, storage, collection, and destruction may be reduced. Stigma associated with the paper transaction process may be diminished. Interstate transfer of participant certification may be facilitated. Opportunities to integrate the delivery of WIC and other services may be expanded. Lost or stolen EBT cards are more easily replaced. Food items may be more easily purchased as needed. Ability to monitor and collect information on products purchased may be increased. Potential disadvantages of this approach include the following: Development and operational costs of EBT, particularly for small food retailers, could present a financial hardship that may decrease the number of stores that wish to participate in WIC. Mandating an implementation date for EBT does not suddenly imbue WIC clinics and state agencies with the interest and the technical understanding necessary to implement EBT. EBT infrastructure at the retail level, especially in rural areas, is not available to meet program needs. No commercial model of EBT exists. Development and timely updating of a national system of specific WIC-approved food product codes necessary for the operation of an EBT system could be difficult, especially for states that use a "lowest price" policy where products allowed by WIC can change from store to store or from day to day. 9. Develop and track national outcome measures for nutrition services and program coordination and integration. This could be accomplished by the following: USDA working with its partners—such as state WIC agencies, HHS, and NAWD—to develop outcome measures. Draw outcome measures from CDC's pediatric and pregnancy surveillance systems (see approach #11). Draw outcome measures from HHS' Healthy People 2010 objectives. Track the measures at the state and national levels. Report annual progress toward achieving goals in a manner similar to that in the Web-based Maternal and Child Health Program information system. Rationale. In response to the Government Performance and Results Act of 1993, USDA has attempted to develop national outcome measures for some of WIC's nutrition services. However, it has had very limited success establishing these measures because of resource constraints and difficulty identifying data. Moreover, USDA relies on the state and local agencies, as grant and subgrant recipients, to provide the services to help accomplish the program's goals and objectives. USDA currently requires state agencies to annually describe their goals and objectives for improving program operations, but it does not require that the state goals be consistent with any of the national goals or objectives. Developing some outcome measures that assess the coordination and integration of WIC services with other health or social service providers would highlight the federal-level objective to provide more consistent care to participants and reduce duplicative activities. 
Potential advantages of this approach include the following: Data and information would be more available for future studies. Using the HHS Healthy People 2010 objectives is an excellent way to achieve consistency with coordinating agencies and programs. If WIC caseworkers focused on key objectives, clearer progress could be made, which would help the program justify funding from the Congress and state legislatures. Successful outcomes may lead to the identification and implementation of best practices. Accountability of state and local agencies may be increased, reducing the need for state and local site visits and monitoring. Potential disadvantages of this approach include the following: The CDC's surveillance systems have significant limitations, including voluntary participation. Some jurisdictions might feel pressured to drop local priorities for national ones if outcome measures were defined the same for all jurisdictions. Different states, regions, and counties use different computer systems and coding schemes to record WIC data, making it difficult to compile data nationally or even statewide. Outcomes measured may be partially attributable to other programs or services, not just to WIC services. Focus on a limited set of outcomes may prompt programs to address outcomes that are easily measurable to the exclusion of others. 10. Require each state WIC agency to develop measurable goals that address state-specific issues and track progress toward meeting these goals. This could be accomplished by the following: USDA and state agencies work as partners to develop state-level measurable goals. Goals should be based on state health issues identified through CDC's pregnancy and pediatric surveillance systems and other systems. Goals should relate to quality of services—such as participant retention (particularly for children) and referral outcomes—in a way that can be quantified. Provide training or technical assistance to state agency staff in developing goals and objectives under the Government Performance and Results Act. Enhance state and local management information systems to support tracking goals (see approach #6). Rationale. While USDA currently requires state agencies to describe their goals for improving program operations on an annual basis, the agency does not require that the goals be measurable. As previously described, about half of the state goals and objectives that we reviewed lacked key elements, such as baseline or target values, needed to measure progress. Using more measurable goals would enable WIC to demonstrate progress at the state level. Potential advantages of this approach include the following: A focus on these measurable goals and objectives would help clinic staff nationwide concentrate on the common purpose of WIC without requiring agencies to employ the same strategies. Measurable goals may lead to more focused, meaningful state WIC plans. State and local agencies may be encouraged to focus on outcome goals rather than caseload. The ability to demonstrate and measure program effectiveness may support funding requests. Potential disadvantages of this approach include the following: This approach does not take into account the differences in state operations and, more importantly, the differences in the type and degree of action required to improve program effectiveness for different states or regions. Data may not be available or reliable for identifying baselines or appropriate targets, or for monitoring progress.
State agencies will require training to develop measurable goals. Attainment of some goals may also be dependent on other health programs. 11. Collect more data relating to WIC participants and program interventions by expanding the CDC pediatric and pregnancy nutrition surveillance systems. This could be accomplished by the following: USDA works with its partners—such as HHS, state WIC agencies, and NAWD—to find ways for WIC to obtain more information from the pediatric and pregnancy nutrition surveillance systems. Increase the number of states and federal programs participating in pediatric and pregnancy nutrition surveillance systems. Increase the number of variables collected by the pediatric and pregnancy nutrition surveillance systems, to include data such as type of WIC nutrition interventions received and household socioeconomic status. Rationale. CDC's pediatric and pregnancy nutrition surveillance systems track the health status of children and the risk factors of mothers who participate in selected federal programs. While data for WIC participants represent a substantial portion of the sample, not every state WIC agency participates. Moreover, the systems do not track individuals over time or collect information on the types of services that individual participants receive. Expanding the data collection associated with these systems would enable WIC to better track program performance and provide critical data needed to evaluate the effectiveness of WIC services. Potential advantages of this approach include the following: Data collection systems, such as CDC's pediatric and pregnancy nutrition surveillance systems, may be an effective approach to improving the amount, national representation, and usefulness of data collected. Improved data may help justify funding and help ensure that it is targeted to treatments most likely to yield successes. Enhanced data systems may provide more relevant data for program planning, monitoring, and evaluation. With all states participating, the usefulness of the data collected is increased. Expansion and enhancement of an existing system may be less costly than creating a new system. Potential disadvantages of this approach include the following: Additional resources may be needed for automated systems and staff training to enable some states to participate in CDC's pediatric and pregnancy nutrition surveillance systems. Much of the information in these systems is incomplete and contains many errors, which raises concerns about accuracy. Significant costs are associated with expanding participation in the surveillance systems, as well as increasing the number of variables in the questionnaires. The variety of counseling topics, the sensitivity of health-related advice, and privacy concerns make nationwide data collection difficult. 12. Develop a strategic plan to evaluate the impact of WIC's nutrition services. This could be accomplished by the following: Identify the research needed to determine the effects of WIC's nutrition service interventions on its participants. Identify necessary data and appropriate research methodologies. Identify resources required to conduct impact research. Rationale. USDA currently spends about $1.1 billion annually for NSA. In recent years, USDA has spent about $2 million to $3 million annually on WIC-related research. Yet, few research findings exist on the effectiveness of specific nutrition services.
According to USDA officials, the money dedicated to research is insufficient to assess the effect of WIC services on participants, in part because of the need for primary data and the complex nature of the required methodologies. Potential advantages of this approach include the following: Well-designed evaluation/research would make it possible to assess program impact and determine appropriate changes. Studying the effects of different nutrition promotion treatments is essential to helping WIC direct its nutrition promotion efforts to the activities and approaches most likely to yield the best results. The identification of the type of research and the resources needed would help to justify the funding support required. Potential disadvantages of this approach include the following: Assessing the effect of specific nutrition education interventions may be difficult. Several obstacles exist to evaluating the impact of WIC's nutrition services. These include participants not being required to attend nutrition education, the lack of clear and well-defined outcomes, and the lack of adequate assessment tools for measuring dietary intake and changes in dietary behavior. Research is difficult, time-consuming, and costly to conduct. Representative samples are difficult to gather from the different types of WIC agencies throughout the United States. Implementing a strategic plan to evaluate the impact of WIC's nutrition services would require a reliable, significant ongoing commitment of funding and staff resources. 13. Provide states with greater flexibility to convert food funds into NSA funding. This could be accomplished by the following: Change legislation to permit states to (1) carry converted funds forward into subsequent years, (2) continually convert food funds resulting from program savings into NSA funding for the purposes of serving more participants, and/or (3) target some food funds to support high-cost nutrition service activities, such as home or hospital breastfeeding support. Rationale. Current program regulations allow states to convert food funds to NSA funds to cover only current-year expenditures that exceed their NSA grants under two conditions: (1) a state has an approved plan for food cost containment and for increases in participation levels above the USDA-projected level and (2) a state's participation actually increases above the level projected by USDA. However, the increased participation supported by the converted funds is not considered in the allocation for the next year. Officials from several state WIC programs and NAWD have indicated that the current conversion policies do not provide any incentives for states to aggressively pursue food cost containment strategies for the purposes of increasing participation. In recognition of the high costs associated with delivering nutrition services to some participants, recent legislation, P.L. 106-224, permits a state-level agency serving remote Indian or Native American villages to convert food funds to NSA funds to cover allowable costs, without having an increase in participation. Potential advantages of this approach include the following: Flexibility may serve as an incentive or reward for containing food costs. For example, states may be more aggressive in using strategies to reduce food costs, including educating participants to be better shoppers, if they knew some of the money saved could be converted to NSA to improve nutrition services. States may have more control over their program budget.
Barriers that states claim prevent them from using current conversion authority would be removed. Fund conversion for targeted purposes such as nutrition education, breastfeeding promotion, and/or outreach may increase participation. Potential disadvantages of this approach include the following: Increased conversion could limit the number of participants served by the program during times of growing caseloads and limited food funds. The quality of food packages provided to participants may suffer, which may also reduce participation. The portion of federal funds spent on NSA, viewed by some as an "administrative expense," may be decreased, misrepresenting the funding requirements of the program. Unless an evaluation requirement is created, the effects of providing increased conversion authority would be unknown. Carrying forward converted funds into subsequent years could result in a significant portion of funds remaining unused and rolled forward from year to year. 14. Increase the level of federal funding for WIC NSA. This could be accomplished by the following: Appropriate additional funds that increase the average grant per participant. Provide additional funds that target specific needs, such as the acquisition of management information systems. Rationale. The federal grant level for NSA is based on the national average of NSA grant expenditures that were made per participant per month in 1987, adjusted for inflation. In fiscal year 2001, grant levels were based on a national average of $12.27 per participant per month. Since the grant level was established, new demands have been placed on the program in part because of new program requirements, shifting demographics, emerging health needs, and changes in the health care and social service environment. In addition, our case studies suggest a decrease in the extent to which nonprogram resources, such as in-kind contributions, are covering nutrition service and administration costs. Potential advantages of this approach include the following: The program may be better able to meet its responsibility as an adjunct to other health care services, including immunizations. The program may be able to fully implement interventions that have been demonstrated to improve immunizations among children enrolled in WIC. The program may be able to implement approaches to address challenges it faces that have been identified above. The recruiting and retention of staff may be improved by offering higher salaries and better benefits. Additional funds targeted for management information systems may help to improve the efficiency of client services and program management. Additional funds targeted for EBT may improve program integrity and streamline financial transactions and reporting. The program may be better able to adjust to changes in the characteristics of the population it serves and the environment in which it operates. The program may be better able to carry out additional responsibilities placed on it since 1987. Potential disadvantages of this approach include the following: No guarantee exists that additional resources would improve outcomes. Additional funds for NSA would be perceived as reducing resources available to provide food benefits to potential participants. More federal funds could reduce the likelihood of state financial support of the program. Additional resources may be difficult to justify without specific information about how much it costs to provide essential services and/or the cost-effectiveness of nutrition services.
15. Increase overall state contributions to WIC NSA. This could be accomplished by the following: Change WIC funding guidelines to require or encourage a state match, either monetary or in-kind, of some portion of WIC NSA funds. Ask states to provide a match for special purpose grants, such as continuing education for WIC staff. Rationale. State agencies rely almost entirely on their federal grants to cover their WIC NSA costs. No state matching requirement exists for WIC—although some states volunteer support for WIC. In responding to our 1999 survey of state-level WIC agencies, 11 state-level agencies reported receiving state funds for WIC in fiscal year 1998. The state contributions ranged from less than 1 percent to just over 37 percent of their total NSA funds. Increasing the level of state contributions for WIC could help to enhance the quality of nutrition services. Potential advantages of this approach include the following: More resources may enhance WIC services; for example, more funding would enable hiring more staff so more time could be spent on nutrition education with participants. An increase in state funds may increase program flexibility. For example, federal restrictions may not apply. State support and commitment to the program may be demonstrated with an increase in state funds. States may have a greater incentive to be efficient. Additional funding sources would strengthen partnerships and program services. Potential disadvantages of this approach include the following: Federal funding may decline. States may divert funds from other public health programs. Some states may turn down federal funding, resulting in fewer resources available for WIC services. Some states, including those with a disproportionate portion of low-income population, may not be able to afford a match. Tension may be created between federal and state goals for the program. 16. Increase the level of WIC funding from other sources. This could be accomplished by the following: Help state and local agencies in the area of resource development. Provide incentives or funding to support state and local fundraising efforts. Generate program-related income, such as from fees for nutrition education or breastfeeding support to noneligible individuals or for processing vendor applications. Rationale. State and local agencies use funding from other sources to enhance WIC services. California WIC has initiated a "WIC Plus" program to identify and obtain other sources of funds for the purpose of enhancing nutrition services. Also, the New York State WIC program is currently formalizing an agreement with the state's TANF program; under this agreement, the TANF program would provide funds to WIC for additional nutrition services to TANF program participants who are also enrolled in WIC. However, based on our survey of local agencies, about 5 percent of the funds received in fiscal year 1998 came from other sources. Obtaining additional funding from other sources may help improve the quality of WIC services. Potential advantages of this approach include the following: Collaboration with other programs, such as TANF and Medicaid, may be increased if other programs paid WIC to provide services to their participants. Services may be enhanced and management information systems improved. Income from charging fees to non-WIC participants for some services may enhance the image of WIC and improve the quality of services offered.
Potential disadvantages of this approach include the following: Not all WIC agencies are able or willing to pursue additional funding. Staff time and resources are needed to administer income-generating efforts. Income could vary from year to year, resulting in variations in program services. In addition to those named above, Peter M. Bramble, Jr.; Corinna A. Nicolaou; Lynn M. Musser; Carolyn M. Boyce; Judy K. Hoovler; Clifford J. Diehl; and Torey B. Silloway made key contributions to this report.
The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) serves almost half of all infants and about one-quarter of all children between one and four years of age in the United States. The WIC program faces the following challenges: (1) coordinating its nutrition services with health and welfare programs undergoing considerable change, (2) responding to health and demographic changes in the low-income population, (3) recruiting and keeping a skilled staff, (4) improving the use of information technology to enhance service delivery and program management, (5) assessing the effect of nutrition services, and (6) meeting increased program requirements without a corresponding increase in funding. This report identifies 16 approaches to address these challenges. Each of the approaches has advantages and disadvantages that policymakers should consider.
Since the Securities Act of 1933 and the Securities Exchange Act of 1934 established the principle of full disclosure—requiring public companies to provide full and accurate information to the investing public—public accounting firms have played a critical role in companies' financial reporting and disclosure. While officers and directors of a public company are responsible for the preparation and content of financial statements that fully and accurately reflect the company's financial condition and the results of its operations, public accounting firms, which function as independent external auditors, are expected to provide an additional safeguard. The external auditor is responsible for auditing companies' financial statements in accordance with generally accepted auditing standards (GAAS) to provide reasonable assurance that a company's financial statements are fairly presented in all material respects in accordance with generally accepted accounting principles (GAAP). Public accounting firms offer a broad range of services to their clients. In addition to traditional audit and attest and tax services, firms also offer consulting services in areas such as information technology. Although all of the Big 4 firms continue to offer certain consulting services, three of the Big 4 have sold or divested portions of their consulting businesses. Following the implementation of Sarbanes-Oxley, SEC issued new independence rules in March 2003, which place additional limitations on management consulting and other nonaudit services that firms could provide to their audit clients. Sarbanes-Oxley also requires auditors to report to and be overseen by a public company's audit committee, which consists of members of the company's board of directors who are required to be independent. The external auditor also interacts closely with the company's senior management, including the chief financial officer. Most of the survey respondents said they were satisfied with their current auditor. Moreover, half of the respondents reported that they have had the same auditor of record for 10 or more years. Respondents gave various reasons for changing auditors, including concerns about their auditor's reputation and fees. They also told us what factors would drive their decision in choosing a new auditor. Almost all respondents said that they used their auditor of record for more than audit and attest functions, including tax-related services and assistance with company debt and equity offerings. Overall, 80 percent (127 out of 158 respondents answering this question) of the respondents said they were "very" or "somewhat" satisfied with their current auditor of record, while 12 percent (19 of 158) said that they were very or somewhat dissatisfied, and 8 percent (12 of 158) said they were neither satisfied nor dissatisfied. Similarly, of the 135 respondents that provided the year they first employed their auditor of record, half said they had retained their auditor of record for 10 years or more. The average tenure was 19 years, ranging from less than 1 year to 94 years. When the 37 public companies that switched from Andersen because of Andersen's dissolution were excluded, the average tenure increased to 25 years, and the percentage of public companies that had retained their auditor for 10 years or more increased to 68 percent. Figure 1 shows the length of the relationship these respondents had with their current auditor.
We found that there was an association between the length of the company-auditor relationship and satisfaction. That is, the longer the relationship between a company and its auditor, the more likely that the company was satisfied with its auditor of record. As figure 2 shows, 94 percent (30 of 32) of companies with auditor tenure of more than 30 years were very or somewhat satisfied with their auditor, whereas 70 percent (28 of 40) of companies using their current auditor for 1 year or less said they were very or somewhat satisfied with their auditor. Sixty-one of the respondents reported that they switched auditors since 1987. Of those 61, 37 were former Andersen clients that switched within the last 2 years as a result of Andersen's dissolution, five were former Andersen clients that switched over 2 years ago for reasons other than Andersen's dissolution, and 19 were other respondents that switched from another Big 4 or non-Big 4 firm since 1987, as shown in table 1. The respondents who were clients of Andersen and had to change auditors within the last 2 years as a result of Andersen's dissolution were somewhat less satisfied with their current auditor than a separate group of 19 respondents that had switched from another Big 4 or non-Big 4 firm since 1987. Of the 37 former Andersen clients, 25 respondents indicated that they were satisfied with their current auditor of record, seven said that they were dissatisfied with their current auditor, and five said they were neither satisfied nor dissatisfied. Of the 19 other respondents that switched from other firms since 1987, proportionally more (16 respondents) said they were satisfied with their current auditor of record, while only one was somewhat dissatisfied and two were neither satisfied nor dissatisfied. While this suggests that clients leaving Andersen because of its dissolution are less satisfied with their current audit arrangements than other companies that had changed auditors in the past, it is important to note that the 37 respondents who were former Andersen clients also had the shortest tenures with their current auditors, which may in part explain their lower satisfaction. Respondents gave a variety of reasons for switching, including concerns about the reputation of their auditor, the need to retain an auditor that could meet companies' new demands, concerns about the level of fees charged for audit and attest services, and increased demands resulting from a corporate merger or change in company ownership. Four respondents said their relationship with their former auditor was no longer working, and another respondent cited a disagreement over an accounting policy that resulted in the switch. While none of the respondents said their company had a mandatory rotation policy, two respondents said their companies switched auditors to obtain a "fresh perspective" and "as a form of good governance." When we asked the respondents what factors would drive their decision if they had to choose a new auditor, they most often cited "quality of services offered" as a factor of "very great" or "great" importance (99 percent or 157 of 159). The second most highly rated factor was "reputation or name recognition of the auditor" (83 percent or 132 of 159), followed by "industry specialization or expertise" (81 percent or 128 of 159). Ninety-four percent (149 of 159) of respondents obtained other services from their auditors in addition to audit and attest services.
We asked respondents if their auditor provided any of the three following categories of services: tax-related, assistance with company debt and equity offerings, and "other services." Only 10 companies, or 6 percent, reported that their auditor of record provided them with only audit and attest services. Respondents for the remaining 149 companies said they used their auditor of record for one or a combination of other services. Specifically, 87 percent (130 of 149) said their auditor provided tax-related services, such as tax preparation, and 71 percent (106 of 149) said they received assistance with company debt and equity offerings. Thirty-seven percent (55 of 149) said they received other services, such as merger and acquisition due diligence, internal control reviews, or tax planning assistance. Respondents had differing views about the impact of past consolidation among the largest accounting firms on audit fees, but most agreed that it had little or no influence on audit quality or auditor independence. While 93 percent (147 of 158) of respondents said that their audit fees increased over the past decade, they were almost evenly divided about whether past consolidation of the largest accounting firms had a "moderate upward" or "great upward" influence (47 percent or 75 of 158) or little or no influence (46 percent or 72 of 158). See figure 4. More respondents said that audit quality had increased over the past decade than said it had decreased, but the majority of them did not believe that past consolidation of the largest accounting firms influenced these changes. Specifically, 44 percent (69 of 158) of the respondents said that audit quality had increased, while 18 percent (29 of 158) said quality had decreased and 37 percent (58 of 158) said there had been little or no change. However, 63 percent (100 of 158) of the respondents believed that consolidation of the largest firms had little or no influence on the quality of audit and attest services their companies received (see fig. 5). The respondents provided other reasons for changes in audit quality, including changes in audit partner, new regulations and audit standards, and technical expertise of the audit team. Several respondents cited the importance of the assigned audit partner to overall audit quality. One respondent noted, "The partner in charge is critical." Another respondent said audit quality improved because of "more personal involvement of the audit partner." Other respondents believed that changes in audit quality were due to changes in audit methodologies and the Sarbanes-Oxley Act. According to one respondent, "The change in the depth and quality of the audit process is due to a more rigorous regulatory and litigation environment and not to audit firm consolidation." Another respondent noted, "Following the Sarbanes-Oxley Act and Andersen's downfall, other firms are increasing the level of work they do and the depth of the audit." Finally, we received comments about the skills and experience of the audit team. One respondent wrote, "Answers to accounting questions take too long and quality of staff is poor.
Fundamental audit practices are gone." Another respondent similarly commented that the "level of experience seems to have declined, contributing to lower quality, partners supervise more jobs." However, that same respondent also noted that since his company had changed auditors, the "level of experience has improved." Finally, 59 percent (94 of 158) of the respondents indicated that their auditor had become more independent over the past decade, while 1 percent (2 of 158) said that their auditor had become less independent and 38 percent (60 of 158) said that there had been no change in their auditor's independence. However, 72 percent (114 of 158) of the respondents also said that past consolidations of the largest accounting firms had little or no influence on auditor independence (see fig. 6). The remaining views varied, with 16 percent (26 of 158) of respondents believing that the consolidations had a negative influence on auditor independence and 8 percent (12 of 158) saying that it had a positive influence. Some of the respondents commented that audits had been positively affected by SEC's new independence requirements, while one respondent said that the new rules had not significantly enhanced auditor independence. Respondents raised concerns about the future implications of consolidation, especially about possible limitations on audit firm choice. A significant majority of respondents said that their companies would not use a non-Big 4 accounting firm for audit services, which limited their choices. While most respondents said that they would be able to use another Big 4 firm as their auditor of record if they had to change, they also said that they would prefer more large firms from which to choose. Moreover, they raised concerns that further consolidation among the largest accounting firms would result in too few choices. Yet, despite those concerns, most respondents favored allowing market forces to dictate the level of competition in the market for audit and attest services. Eighty-eight percent (139 of 158) of respondents indicated that they would not consider using a non-Big 4 firm for audit and attest services. As shown in figure 7, nearly all the respondents cited three factors as being of great or very great importance in determining why their companies would not use a non-Big 4 firm: (1) auditor's technical skills and knowledge of the company's industry (91 percent or 126 of 138); (2) the reputation of the accounting firm (91 percent or 126 of 138); and (3) the capacity of the firm (90 percent or 125 of 138). These three factors also corresponded closely to the most frequently cited factors in choosing a new auditor as previously noted in figure 3. One respondent noted, "We have operations in 40 countries and want all our auditors to operate with the same systems and procedures. Only a global firm can deal with this complexity in a cost-effective manner and give us the continuity of support for U.S. generally accepted accounting principles and local statutory requirements." Another respondent noted, "We would want a Big 4 firm because of its global presence and capabilities, reputation, and depth of resources available." Sixty-five percent (89 of 137) of respondents also cited geographic presence and 60 percent (81 of 134) cited the lack of consent from the company's board of directors as reasons of great or very great importance.
Respondents also provided the following reasons as to why they would not use a non-Big 4 firm: their shareholders would not want a non-Big 4 firm; to gain investor confidence or stock market acceptance; Big 4 firms have financial resources to stand behind their work; public companies are expected to use them; and the quality of services provided by a Big 4 firm. While 57 percent (90 of 158) of respondents said that the number of firms their companies could use for audit and attest services was adequate as compared with the 43 percent (68 of 158) who said it was not, 86 percent (117 of 136) told us that ideally there should be more than four large accounting firms as viable choices for large national and multinational public companies. In responding to our question on what they thought the optimal number of firms for large companies should be, 74 percent (100 of 136) said they would prefer from five to eight large accounting firms to provide audit and attest services to large national and multinational public companies and 12 percent (17 of 136) of the respondents preferred more than eight firms. Fourteen percent (19 of 136) of the respondents said four or fewer firms would be optimal. Most comments we received in favor of more firms addressed the need to increase competition, decrease fees, and comply with the new independence rules as required by Sarbanes-Oxley. Respondents noted, "More firms will improve the competition in the industry," "more choices, more competition, lower cost," and "one firm provides tax planning services which may impair independence." Another respondent wrote, "Slightly more options would enhance technical resourcing opportunities external to current auditors." However, we also received many comments cautioning that too great a number of firms might have negative implications. One respondent said, "Any greater number of firms would have difficulty in maintaining scale to properly serve large international companies." According to another respondent, "If the number gets too big, then hard to have level of expertise in certain industries." Some respondents felt that four or five big firms would be sufficient. One respondent wrote, "As a firm believer in the efficiency of the marketplace, I believe that the current number of large firms (4) is probably close to the optimum number, but wouldn't mind seeing another major firm gradually emerge." Another respondent wrote, "Balance must be struck between competition and fragmentation of a fixed talent pool." When asked the minimum number of accounting firms necessary to provide audit and attest services to large national and multinational public companies, 82 percent (120 of 147) of respondents indicated that the market was either at its minimum or already below the minimum number required. Fifty-nine percent (86 of 147) said that four or five large accounting firms would be the necessary minimum. According to one respondent, "Four is the absolute minimum, because if you currently use one firm for external audit purposes and another firm for internal audit purposes, that only leaves two other firms from which to choose if you want to change auditors or use a Big 4 firm for consulting services." Some respondents pointed out that not even all the Big 4 firms have the necessary industry expertise required to conduct their companies' audits.
According to one respondent, "From a realistic standpoint, only one other Big 4 firm has a utility practice that would help understand our industry." Another respondent wrote, "We use one of the Big 4. Two of them do not have industry expertise. Only one of the remaining three has industry expertise in the geographic region." Although Sarbanes-Oxley prohibits a company's external auditor from providing internal audit services and certain other consulting services to the same company, many companies currently use one of the Big 4 as their external auditor and one of the remaining three Big 4 firms for nonaudit services such as tax consulting and internal audits. Therefore, a company with this arrangement that needed to change auditors would have one fewer alternative or would need to terminate its internal audit or consulting relationship. For example, one respondent noted, "Aside from our current auditor, we use another of the Big 4 as a co-source provider of internal audit services, so would not consider them. We are using a third for tax work so it would be hard under Sarbanes-Oxley to switch to them." Despite the fact that 94 percent of respondents said they had three or fewer options from which to choose if they had to change auditors, 62 percent (98 of 159) of respondents said they would not suggest that any actions be taken to increase competition in the provision of audit and attest services for large national and multinational companies. When asked whether steps should be taken to increase the number of available choices, 79 percent (65 of 83) opposed government action to break up the Big 4, while 66 percent (55 of 83) opposed any government action to assist non-Big 4 firms. Seventy-eight percent (64 of 82) of respondents said they would favor letting market forces operate without government intervention. While some respondents expressed their belief that the market would adjust to create a more competitive environment, others expressed uncertainty about whether government actions could increase competition. According to one respondent, "Government action to assist the non-Big 4 firms will not work. The level of expertise and depth of resources required to deal with ever increasing levels of complexity and regulation cannot be government intervention." However, another respondent commented, "Having only four large firms is a concern. The benefits of consolidation should be higher quality, less variation in advice, stronger financial resources of the accounting firm, and more accountability. If these benefits are not achieved, then the government may need to intervene." In addition, several respondents expressed concern about further consolidation. Referring to the dissolution of Andersen, one respondent said, "Our biggest concern is the ease with which a firm can disappear." Another stated, "The failure of Andersen had a devastating impact and ultimately resulted in fewer qualified professionals providing attest services during a time of rapidly increasing complexity in applying GAAP." We are sending copies of this report to the Chairman and Ranking Minority Member of the House Committee on Energy and Commerce. We are also sending copies of this report to the Chairman of SEC, the Chairman of the Public Company Accounting Oversight Board, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. This report was prepared under the direction of Orice M.
Williams, Assistant Director. Please contact her or me at (202) 512-8678 if you or your staffs have any questions concerning this work. Key contributors are acknowledged in appendix IV. We surveyed a random sample of 250 of the 960 largest publicly held companies. We defined this population using the 2003 list of the Fortune 1000 companies produced by Fortune, a division of Time, Inc., after removing 40 private companies from this list. We mailed a paper questionnaire to the chief financial officers, or other executives performing a similar role, requesting their views on the services they received from their auditor of record, the effects of past consolidation on competition among accounting firms, and its potential implications. To develop this questionnaire, we consulted with a number of experts at GAO, the American Institute of Certified Public Accountants, and the Securities and Exchange Commission, and pretested a draft questionnaire with six large public companies from a variety of industries. The survey began on May 6, 2003. We removed one company that had gone out of business and received 159 usable responses as of August 11, 2003, from the final sample of 249 companies, for an overall response rate of 64 percent. The number of responses to an individual question may be fewer than 159, depending on how many respondents answered that question. While the survey results are based on a random sample drawn to be representative of the population of publicly held Fortune 1000 companies and thus could be adjusted statistically to represent the whole population, including those not sampled, we are instead reporting totals and percentages only for those companies actually returning questionnaires. We did this because a significant number of sampled companies did not respond, and the answers respondents gave could differ from those nonrespondents might have given had they participated. This kind of potential error from nonresponse, when coupled with the sampling error that results from studying only a fraction of the population, made it particularly risky to project the results of our survey not only to the nonrespondents, but also to the part of the public company population we did not sample. There are other practical difficulties in conducting any survey that may also contribute to errors in survey results. For example, differences in how a question is interpreted or the sources of information available to respondents can introduce unwanted variability into the survey results. We took steps during data collection and analysis to minimize such errors. In addition to the questionnaire testing and development measures mentioned above, we followed up with nonresponding companies with telephone calls to help them overcome problems they encountered in completing the survey and to encourage them to respond. We also checked and edited the survey data and programs used to produce our survey results. All 159 companies responding to our survey employed a Big 4 firm as their auditor of record. These companies derived an average of 83 percent of their total revenues from operations within the United States and paid, on average, $3.19 million in fees to their auditor of record in the fiscal year prior to the survey. Using Standard Industrial Classification (SIC) codes, we found that 149 respondents represented 39 different industry sectors; we could not identify an SIC code for the other 10 respondents.
The top 7 industry sectors represented were electric, gas, and sanitary services (17 companies), depository institutions (10 companies), business services (9 companies), industrial and commercial machinery and computer equipment (9 companies), wholesale trade-non-durable goods (9 companies), chemicals and allied products (8 companies), and electronic and other electrical equipment and components, except computer equipment (6 companies). This appendix also reproduces the survey questionnaire mailed to the sampled companies. The cover material asked that the questionnaire be completed for the company named in the cover letter, not for any subsidiaries or related companies, and returned in the enclosed envelope within 10 business days of receipt. The questionnaire asked respondents about the percentage of company revenues derived from operations within and outside the United States; the year the company was founded, if founded within the past decade; the name of the current auditor of record and the year that firm was hired; the types of services the auditor of record provides (audit and attest services only, tax-related services such as tax preparation, assistance with company debt and equity offerings such as comfort letters, or other services); the total annual fees paid to the auditor of record for audit and attest services during the last fiscal year (reported fees ranged from $13,807 to $62,000,000); whether the company had employed more than one auditor of record since 1987, the names and tenures of the most recent previous auditors, and the reasons for changing auditors; and, for former Arthur Andersen clients, whether the company switched to the firm to which its previous Andersen partner moved. The questionnaire then asked how the fees the company pays for audit and attest services, the overall quality of those services, and the auditor's independence have changed over the past decade, and how much the consolidation of the largest accounting firms influenced each. For these questions, audit quality was described as including the knowledge and experience of audit firm partners and staff, the capability to efficiently respond to a client's needs, and the ability and willingness to appropriately identify and surface material reporting issues in financial reports; auditor independence was described as relating to the accounting firm's ability and willingness to appropriately deal with financial reporting issues that may indicate materially misstated financial statements, the appearance of independence in terms of the other services a firm is allowed to and chooses to provide to its clients, and how much influence clients appear to have in audit decisions. Finally, the questionnaire asked whether consolidation has made it harder or easier to select an auditor and maintain a relationship with that auditor, how consolidation has affected competition in the provision of audit and attest services, the minimum and optimal numbers of accounting firms necessary to provide audit and attest services to large national and multinational public companies, whether any actions should be taken to increase competition and which actions the respondent would favor or oppose, and any additional comments on the issues covered by the survey. Companies surveyed were invited to add written comments to a number of questions to further explain their answers.
Of the 159 companies that responded to the survey, 149 volunteered written answers to at least one of the eight key open-ended comment questions in our survey: change in audit quality, the number of auditor options, the sufficiency of such options, willingness to use the auditor of a competitor, minimum number of audit firms necessary, optimal number of firms, suggested actions for increasing competition, and any additional comments on the survey. The following tables display selected comments from some respondents to these eight questions. Some of the quotes illustrate typical comments made by several other companies, while others represent a unique viewpoint of only that company. While these specific comments provide valuable insights, the number of comments of a particular type reproduced here is not necessarily proportional to the number of other similar responses, and, therefore, the comments do not represent the variety of opinion that might be found in the population of large public companies as a whole. More respondents said that overall audit quality had gotten better over the past decade than worse (44 percent compared to 18 percent). The reasons behind these ratings are presented in table 2, grouped into summary categories. Almost all respondents—94 percent—indicated that they had three or fewer options from which to choose if they had to change auditors, and 61 percent said exactly three. The explanatory comments we received to that question, shown in table 3, confirm that respondents are almost always referring to the Big 4 firms other than the one they currently employ. As only 8 percent of respondents said they currently use or would consider using a non-Big 4 firm, there were few written explanations for why they thought they had more than three or four options. Those who did explain mentioned the national prominence of the larger second-tier firms and smaller firms with special industry expertise as reasons. Almost half of the respondents (43 percent) said they did not have enough options and desired more. Respondents who said they had enough options said the Big 4 firms were able to meet their needs. However, several of these respondents cautioned that further reductions could be problematic. Those saying the number of firms was not sufficient often took the position that "more competition is always better." Other comments noted that differentiation among the firms' services was declining, that special expertise was no longer readily available, and that the firms showed monopolistic tendencies in setting fees. See table 4. More than 90 percent of our respondents said that their company would choose the auditor of a competitor. A few of those respondents provided explanations as to why they would or would not, as shown in table 5. A large majority (82 percent) of respondents said that the minimum number of firms necessary to provide audit services to large companies such as theirs was four or more. The largest number of responses was received for four or five firms. See table 6. Most (86 percent) respondents said the optimal number of firms was greater than four, although the majority of those responses remained in the five to eight range. See table 7 for selected comments. While most respondents did not suggest that any actions be taken to increase competition, those that favored action mentioned assisting non-Big 4 firms by reducing barriers to entry, preventing further consolidation, breaking up the Big 4, and other actions. Many suggested that market forces should be allowed to operate without intervention. See table 8. Respondents were also invited to provide additional comments on any of the issues covered in the survey.
A number of respondents mentioned concerns about further consolidation in the accounting profession, cost and quality, and other issues such as the impact of the Sarbanes-Oxley Act and proposals for mandatory audit firm rotation. In addition to those individuals named above, Martha Chow, Marc Molino, Michelle Pannor, David Pittman, Carl Ramirez, Barbara Roesmann, and Derald Seid made key contributions to this report.
The largest accounting firms, known as the "Big 4," currently audit over 78 percent of U.S. public companies and 99 percent of public company annual sales. To address concerns raised by this concentration and as mandated by the Sarbanes-Oxley Act of 2002, on July 30, 2003, GAO issued a report entitled Public Accounting Firms: Mandated Study on Consolidation and Competition (GAO-03-864). As part of that study, GAO surveyed a random sample of 250 public companies from the Fortune 1000 list; preliminary findings were included in the July report. This supplemental report details more comprehensively the 159 responses we received through August 11, 2003, focusing on (1) respondents' relationships with their auditor of record in terms of satisfaction, tenure, and services provided; (2) the effects of consolidation on audit fees, quality, and independence; and (3) the potential implications of consolidation for competition and auditor choice. Most of the 159 respondents said that they were satisfied with their current auditor, and half had used their current auditor for 10 years or more. Generally, the longer a respondent had been with an auditor, the higher the overall level of satisfaction. Consistent with high levels of satisfaction, GAO found that, aside from former clients of Arthur Andersen, few respondents had switched auditors in the past decade. When they did, they switched because of reputation, concerns about audit fees, and corporate mergers or management changes. In looking for a new auditor, the factors the respondents most commonly cited were quality of service, industry specialization, and "chemistry" with the audit team. Finally, almost all respondents used their auditor of record for a variety of nonaudit services, including tax-related services and assistance with company debt and equity offerings. Respondents had differing views about whether past consolidation had some influence on audit fees, but most believed that consolidation had little or no influence on audit quality or independence. Respondents commented that other factors--such as new regulations deriving from the Sarbanes-Oxley Act and changing auditing standards--have had a greater impact on audit price, quality, and independence. While half of the respondents said that past consolidation had little or no influence on competition and just over half said they had a sufficient number of auditor choices, 84 percent also indicated a preference for more firms from which to choose, as most would not consider using a non-Big 4 firm. Reasons most frequently cited included (1) the need for auditors with technical skills or industry-specific knowledge, (2) the reputation of the firm, and (3) the capacity of the firm. Finally, some expressed concerns about further consolidation in the industry and the limited number of alternatives were they to change auditors under existing independence rules.
Both the MS-13 and 18th Street gangs were formed in Los Angeles, California. MS-13 was founded by Salvadoran immigrants, many of whom came to the United States to escape the civil war in their native country in the 1980s. The 18th Street gang was founded primarily by Mexican immigrants in the 1960s, though it currently accepts members from other backgrounds. MS-13's early membership is reported to have included former guerrillas and Salvadoran government soldiers whose combat experience during the Salvadoran civil war contributed to the growth of the gang's notoriety as one of the more violent Los Angeles street gangs. The end of the Central American civil wars and changes in U.S. immigration laws helped to facilitate the removal, in the 1990s, of tens of thousands of Central Americans who were in the United States illegally to their native countries, including MS-13 and 18th Street gang members who subsequently spread their gang culture and operations to those countries. MS-13 and 18th Street gang members removed from the United States to Central American countries established gangs in those countries. Within the United States, NGIC has reported that MS-13 has between 8,000 and 10,000 members nationally. The FBI has reported that MS-13 operates in at least 42 states and the District of Columbia. Traditionally, in the United States, MS-13 has consisted of loosely affiliated groups; however, law enforcement officials have reported the coordination of criminal activity among MS-13 gang members operating in the Atlanta, Dallas, Los Angeles, New York, and Washington, D.C., metropolitan areas. In the 2009 National Gang Threat Assessment, the NGIC and NDIC indicated that MS-13 members have been involved in a wide range of crimes within U.S. communities, including homicide, drive-by shootings, assault, robbery, weapons trafficking, the transportation and distribution of drugs, identity theft, and prostitution operations. The 18th Street gang is active in 28 states and has a membership estimated at between 30,000 and 50,000. According to the 2009 National Gang Threat Assessment, in California, for example, about 80 percent of 18th Street gang members are illegal aliens from Mexico and Central America. In the United States, 18th Street gang members have been involved in homicide, assault, robbery, street-level drug distribution, auto theft, and identification fraud. Although estimates vary, in the Central American countries of El Salvador, Honduras, Nicaragua, and Guatemala, USAID has estimated that there are approximately 63,000 gang members, while the U.S. Southern Command (SOUTHCOM) has estimated total gang membership in Central America to be approximately 70,000. According to USAID, the majority of these members belong to MS-13 and 18th Street. Within Central American countries, these gangs engage in a range of criminal and violent acts, including homicide, kidnapping, drug smuggling, and extortion, among other crimes. The NSC is the President's principal forum for considering national security and foreign policy matters with his senior national security advisors and cabinet officials. The council also serves as the President's principal arm for coordinating these policies among various government agencies. As such, under its IOCPCC, the NSC coordinated with other federal departments and agencies to develop a strategy to combat the threat of criminal gangs from Central America. Various other federal departments and agencies play key roles in U.S. federal government efforts to address transnational gangs.
As shown in figure 1, these departments include DOJ, DHS, State, USAID, and DOD. Within DOJ, seven components have key roles in law enforcement efforts to combat transnational gangs—the Criminal Division; the 93 U.S. Attorneys in 94 judicial districts across the nation that operate with administrative and operational support from the Executive Office for U.S. Attorneys (EOUSA); and four law enforcement agencies—the FBI; Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF); Drug Enforcement Administration (DEA); and U.S. National Central Bureau of Interpol. The Criminal Division, along with the U.S. Attorneys, is charged with enforcing most federal criminal laws and can prosecute a wide range of criminal matters, including those involving transnational gangs and gang members. The Criminal Division's International Criminal Investigative Training Assistance Program and Office of Overseas Prosecutorial Development, Assistance and Training have been involved in providing antigang training for law enforcement officials and prosecutors from Central America. Also part of the Criminal Division is the Gang Unit, a specialized group of prosecutors charged with developing and implementing strategies to address gangs. In addition to prosecuting gang cases, the Gang Unit prosecutors assist U.S. Attorneys on legal issues and multidistrict cases, as well as work with domestic and foreign law enforcement to coordinate enforcement strategies. The 93 U.S. Attorneys prosecute the majority of criminal cases as well as civil litigation handled by DOJ. EOUSA provides general executive assistance and guidance to U.S. Attorneys' Offices (USAO) and has a national gang coordinator who acts as a liaison between the USAOs and other DOJ components involved in gang prosecution efforts. The FBI's transnational gang efforts target violent crime and criminal enterprises associated with transnational gangs. ATF's primary involvement with MS-13 and 18th Street is related to gang members' illegal possession of, or trafficking in, firearms. DEA targets gangs in connection with specific drug sources or large-scale suppliers who distribute illicit drugs to the gangs. The U.S. National Central Bureau of Interpol is the point of contact for all International Criminal Police Organization (INTERPOL) matters in the United States, including secure communications with police authorities in INTERPOL member countries. As such, among other things, the National Central Bureau receives and sends out notices to INTERPOL bureaus in other countries concerning information or the location of gang members or suspects involved in gang activities. Within DHS, ICE's Office of Investigations has a National Gang Unit that manages and coordinates national efforts to combat the growth and proliferation of transnational criminal street gangs. Gang members who are involved in crimes with a nexus to the border, or who are foreign-born and in the United States illegally, may be subject to ICE's dual criminal and administrative authorities that are used to disrupt and dismantle transnational gang activities with criminal prosecutions and removal from the United States. In addition, U.S. Customs and Border Protection (CBP), the DHS component that protects U.S. borders against terrorism, illegal immigration, and drug smuggling, among other threats, has a role in identifying gang members at the borders.
Upon the arrest of a suspected gang member, CBP will contact ICE and determine if enforcement action is to be taken by CBP or ICE based upon whether the apprehension took place between the ports of entry or at a port of entry. CBP has developed an Anti-Gang Initiative to improve the agency's awareness of gangs through increased partnerships with other federal agencies and to provide gang awareness training for its personnel. Two State bureaus, the Bureau of Western Hemisphere Affairs and the Bureau of International Narcotics and Law Enforcement Affairs, are involved in efforts to address gang violence in Central America. The Bureau of Western Hemisphere Affairs is responsible for managing and promoting U.S. interests in the region and fostering cooperation on issues such as drug trafficking and crime. The Bureau of International Narcotics and Law Enforcement Affairs advises the U.S. government on the development of policies and programs to combat narcotics and crime, and works with host nations to strengthen their capabilities so that they can bolster their own effectiveness in fighting drug trafficking and crime, including transnational gangs. USAID provides economic, development, and humanitarian assistance to other countries and, with respect to transnational gangs, is the primary agency responsible for managing gang intervention and prevention efforts in Central America. These efforts are carried out principally through the agency's Bureau for Latin America and the Caribbean and missions in Central America, with technical assistance and support from the Democracy and Governance Office of the Bureau for Democracy, Conflict and Humanitarian Assistance. The agency works with foreign governments and local communities in Central America to support and implement a broad range of programs focused on, among other things, creating employment opportunities and alternatives to participating in gangs. DOD, and specifically SOUTHCOM, has tracked the growth of the gangs in the Central American countries that are within its area of responsibility. Although SOUTHCOM does not have any specific programs in place in Central America to combat transnational gangs, it monitors information on gangs that either pose a threat to the sovereignty of governments in the region or are involved in drug trafficking. U.S.-sponsored antigang programs in Central America are being funded in part by the Mérida Initiative. This initiative was announced by the Bush Administration in October 2007 as a multinational effort to confront criminal organizations whose actions affect Mexico, Central America, and the Caribbean countries of the Dominican Republic and Haiti, and spill over into the United States. Through this initiative, the U.S. federal government is providing equipment, training, and other assistance to help these countries address drug and arms trafficking, bulk cash smuggling, and other crime issues such as gangs and organized crime. For Central America, funding under the Mérida Initiative is administered by State and has been allocated to various areas and efforts, including efforts to combat transnational gangs, improve Central American countries' judicial systems, enhance airport and border security in the region, refurbish patrol boats used by Central American countries for intercepting drug traffickers in coastal waters, and support a broad range of crime prevention programs, including programs directed at at-risk youth.
In fiscal year 2008, the Supplemental Appropriations Act appropriated $60 million for the Central American portion of the Mérida Initiative. In fiscal year 2009, the 2009 Omnibus Appropriations Act appropriated $105 million for Mérida Initiative activities in Central America. Up to $83 million was appropriated for Mérida Initiative activities in Central America for fiscal year 2010 by the Consolidated Appropriations Act, 2010. We recently completed work looking at the status of funds for the initiative and have ongoing work examining U.S. counternarcotics and anticrime assistance provided to Mexico under the initiative. We plan to issue a report on this work later this year. As of February 2010, federal agencies were developing a strategy for a regional security initiative in Central America, called the Central American Regional Security Initiative (CARSI). This new initiative is in accordance with direction in the conference report accompanying the Consolidated Appropriations Act, 2010, which removed the Central American portion from the Mérida Initiative and placed funding for Central American programs into the new CARSI. Under this new initiative, Central American programs initially funded by the Mérida Initiative, including antigang programs under the Strategy, would be subsumed into CARSI. According to officials from State and USAID, the CARSI strategy is still under development and officials did not have an estimate as to when it would be completed. In addition to funding provided under the Mérida Initiative, federal agencies have used funding from their operating accounts to implement antigang programs in Central America and the United States. For example, in fiscal years 2007 and 2008, the FBI provided $200,000 and $965,000 from its operating account for the establishment of the Transnational Anti-Gang (TAG) unit in El Salvador and for the operations of the MS-13 National Gang Task Force (an FBI task force that coordinates FBI-led investigations of MS-13 and 18th Street gangs), respectively. Further, ICE uses funding to conduct transnational gang investigations in the United States and abroad. According to ICE officials, $20.4 million that Congress directed to be used for ICE's antigang activities in fiscal year 2008 funded 119 positions to expand ICE's efforts to combat transnational street gangs. Additionally, USAID officials reported that they started their gang prevention programs before Mérida Initiative funding became available and have used non-initiative resources to promote antigang and rule-of-law programs in Central America. Various federal departments and agencies under the auspices of the NSC developed an interagency strategy for combating gangs with connections to Central America that defines the roles and responsibilities of federal agencies in carrying out the strategy, identifies the problems and risks associated with the gangs, defines its scope and purpose, and identifies specific activities to be taken to achieve results. However, it lacks other key characteristics, such as an approach or framework, including an entity responsible for overseeing implementation, and goals and measures for assessing progress and performance in implementing the strategy. To respond to the threats criminal gangs such as MS-13 and 18th Street pose to the countries in which they operate, U.S. federal agencies developed the Strategy to Combat the Threat of Criminal Gangs from Central America and Mexico (the Strategy).
Issued in July 2007, this interagency strategy was developed under the auspices of the NSC's IOCPCC, comprised of representatives from various federal agencies, including State, DOJ, DHS, USAID, and DOD. The Strategy is designed to combat the threat posed by gangs with links to Central America and Mexico by adopting an approach that integrates law enforcement with youth crime prevention and interventions that provide alternatives to gangs. The Strategy is also designed to be regional in scope, with the United States working with the other countries affected by the gangs to avoid transferring the gang problem to neighboring countries. To implement this approach, the Strategy includes five broad categories under which federal agencies are to take actions to combat transnational gangs—diplomacy, repatriation, law enforcement, capacity enhancement, and prevention—and identifies the activities for agencies to implement under each of these categories. As shown in table 1, for each category, the Strategy identifies agencies that are to implement the individual activities and a lead agency to coordinate these activities. Specifically, the Strategy identifies State as the lead agency for the diplomacy category and, along with USAID, the lead agency for the capacity enhancement category; DHS as the lead agency for the repatriation category and, along with DOJ, the lead agency for the law enforcement category; and USAID as the lead agency for the prevention category. As part of our prior work on desirable characteristics of effective national strategies, we have reported that such strategies are the foundation for defining what agencies seek to accomplish. As such, we found that having characteristics like a description of agencies' activities, roles, and responsibilities as part of a strategy helps to enhance the strategy's effectiveness. Strategies that include characteristics such as these provide policymakers and implementing agencies with a planning tool that can better help ensure accountability and more effective results. In addition to defining activities, roles, and responsibilities of participating agencies, the Strategy also defines its scope and purpose and identifies the problems, risks, and threats associated with transnational gangs. In our prior work, we found that desirable characteristics of effective national strategies also include a discussion of purpose, scope, problems, risks, and threats. For example, in defining its purpose and scope, the Strategy notes that effectively addressing the problem of these transnational gangs requires close coordination and information sharing among the affected countries in Central America and Mexico and a comprehensive approach that includes law enforcement, prevention, intervention, rehabilitation, and reintegration for gang members, which the five categories of the Strategy are intended to address. With regard to identifying the problem, the Strategy states that gangs such as MS-13 and 18th Street threaten U.S. regional interests in fostering stable democracies and the U.S. domestic interest in protecting U.S. citizens from gang violence and crime. These characteristics help indicate why the Strategy was developed and identify the specific national issues and threats toward which the Strategy is directed. Although the Strategy contains several characteristics of effective national strategies, it lacks others, such as an approach for overseeing implementation of programs and efforts across its different categories.
Our prior work on effective national strategies found that they include an approach or framework for overseeing their implementation or describe the organizations that will provide oversight, which enhances the accountability of agencies and stakeholders to implement programs as planned. This is especially important for the U.S. antigang strategy given that (1) there are eight federal departments or agencies involved in implementing the Strategy; (2) these agencies have wide-ranging missions and programs—from USAID's mission to implement gang prevention and youth intervention programs to ICE's mission to remove foreign-born gang members from the United States; and (3) the Strategy specifies 35 different activities under the five categories that federal agencies are to implement. With regard to oversight, the Strategy itself does not identify an approach or framework for providing oversight across agencies' implementation of the Strategy's categories and activities and, according to DOJ, State, and USAID officials, one does not exist. Although the Strategy designates lead departments or agencies for each category, such as State leading the diplomacy category and USAID leading the prevention category, it does not designate an approach for overseeing the overall implementation of the Strategy across its various categories. Further, while members of an interagency antigang task force have discussed agencies' efforts to implement the Strategy, this task force is not intended to provide this oversight. According to DOJ, State, and USAID officials, under the auspices of the NSC, the International Anti-Gang Task Force, which is chaired by DOJ's Criminal Division and includes representatives from DOJ, DHS, State, and USAID, has responsibility for sharing information on the implementation of the Strategy. However, this task force is not intended to, nor does it, provide oversight for holding agencies accountable for implementation of their activities under the Strategy. Additionally, State's regional gang advisor stated that no single department or entity has been identified as having oversight responsibility for the Strategy's implementation. DOJ officials did not know why an oversight mechanism was not included in the Strategy, noting that, as a result, there is no enforcement mechanism to ensure that agencies are implementing their respective parts of the Strategy. USAID officials told us that after the Strategy was developed, the individual within the NSC who had been responsible for coordinating development of the Strategy stated that the council's IOCPCC was to oversee the implementation of the Strategy; however, this did not occur, in part because the individual left the council. Additionally, although federal agencies are developing a strategy for the newly formed CARSI, participating agencies have not yet determined the oversight framework, if any, that is to be used for this broader initiative or whether the existing antigang Strategy will be incorporated into CARSI. Thus, it is too early to tell whether CARSI will provide an oversight approach for federal agencies' antigang programs.
Regardless of whether the antigang Strategy is incorporated into the new CARSI, or whether the NSC or some other agency or entity is responsible for oversight, establishing an approach or framework for oversight across the Strategy’s categories could help enhance the accountability of agencies to implement activities as laid out in the Strategy and provide visibility over the extent to which agencies’ individual efforts are achieving their intended results under the Strategy. Our prior work found that effective national strategies set clear goals and related performance measures for assessing progress made in achieving intended results. The Strategy, however, does not identify the goals that are to be achieved through its implementation and the associated measures to track the progress made in achieving those goals, which could be established and monitored through an oversight approach or framework. We have reported that performance measurement is important because decision makers can use performance information to identify problems or weaknesses in programs, identify factors causing problems, and modify processes to address the problems. USAID officials told us that after the Strategy was initially developed, the NSC intended to establish performance measures for implementation of the Strategy, but this did not occur in part because the individual within the NSC who had been responsible for coordinating development of the Strategy left the council. Although the Strategy itself lacks goals and measures to gauge results and assess progress across the Strategy’s categories and activities, State and USAID, for their parts of the Strategy, have begun to develop mechanisms to assess the results of their efforts being implemented under the Mérida Initiative, including the initiative’s antigang programs. For example, for its part, State has drafted four gang-specific performance measures within the broader set of measures it is developing for the Mérida Initiative: (1) number of arrests of suspected gang members completed by police units trained/equipped through Mérida Initiative funding in countergang strategies, (2) number of arrests and prosecutions of gang leaders in the region, (3) number of gang-related crime occurrences and homicides in the region, and (4) number of instances where gang-related information is passed from the TAG to U.S. law enforcement for review/action. State officials noted that the department is working with its embassies to determine if Central American countries will be capable of providing the department with the requisite data needed to determine results and outcomes for these measures, as these countries control much of the data, often in disparate data sets and across various ministries. As of November 2009, State officials also reported that they were in the process of reviewing bids from contractors to develop performance measures for the department’s Mérida Initiative programs, including its antigang programs, and to work directly with the host nations to obtain the necessary data to determine the results and outcomes of the efforts based on these measures. In addition to the measures State is developing to evaluate the results of Mérida Initiative-funded programs in Central America, USAID has developed a Mérida Initiative Central America Results Framework that includes an effect evaluation to be conducted by Vanderbilt University through a contractual arrangement with USAID. 
The agency intends for this evaluation to assess the long-term effect and measure the results of its programs in Central American communities that are the focus of USAID crime prevention efforts under the Mérida Initiative, including those related to gangs. The evaluation consists of five elements: (1) community surveys, (2) reviews of demographic data in the communities, (3) focus groups, (4) interviews with stakeholders such as community leaders, and (5) observations of community conditions, such as physical infrastructure. Vanderbilt University officials are to conduct the evaluations every 18 months in communities where USAID-sponsored crime prevention activities have been implemented and communities where no activities have been implemented, with these latter communities serving as control groups in order to establish a baseline. Specifically with respect to the surveys, USAID plans to use the results to gauge the effect of its crime prevention programs through community and citizen perceptions of safety and security. To minimize any duplication and take advantage of survey efforts already underway, USAID officials stated that they plan to incorporate the survey questions on community and citizen perceptions of safety and security as part of a broader survey Vanderbilt University will be conducting in the region in 2010. Although State and USAID have begun to develop mechanisms to help assess the outcomes of antigang programs implemented under the Mérida Initiative, these mechanisms do not encompass all of the Strategy's categories and activities, nor do they include the antigang programs of other federal agencies, such as those of DOJ and DHS. For example, while the measures State is developing, such as the numbers of arrests of gang members and leaders in the region, relate to the law enforcement, capacity enhancement, and prevention categories of the Strategy, these measures do not encompass the diplomacy or repatriation categories of the Strategy. According to State and USAID officials as well as officials from the FBI and ICE, State and USAID have not consulted or worked with DOJ and DHS agencies such as FBI and ICE in developing these performance measures because State's and USAID's measures are intended to encompass only their own programs and efforts. According to State, USAID, DOJ, and DHS officials, each agency focuses on developing performance-related goals and measures for its own programs, as opposed to other agencies' antigang programs for which it is not responsible and with which it is therefore less familiar. As a result, the performance measures State and USAID are developing cannot serve as overall indicators of the federal government's progress in implementing the Strategy as they do not take into account all of the federal agencies' antigang programs to be implemented under the Strategy's five categories, including those programs led by DOJ and DHS. In the absence of goals and performance measures or other mechanisms for monitoring and assessing the progress and performance of agencies' antigang programs across the categories of the Strategy, it will be difficult for the federal government to determine if the overall interagency antigang effort is achieving the intended results and to hold agencies accountable for implementing the Strategy. Federal agencies have implemented a variety of programs to carry out the Strategy and combat transnational gangs with connections to Central America.
To coordinate their implementation of antigang programs, agencies use a variety of mechanisms such as interagency committees and task forces. However, for the antigang unit in El Salvador, coordination among the FBI, ICE, and Salvadoran law enforcement in sharing investigative information on gangs could be enhanced by reaching agreement on ICE’s participation in the unit. Further, although agencies have taken steps to develop performance measures and obtain data on those measures to track the results of programs, agencies are just starting to collect performance data due to the early stage of implementation of most of these programs. Additionally, federal agencies have identified various factors that are largely outside their control and that can affect their implementation of programs, such as challenges facing Central American countries in sustaining antigang programs. To carry out the Strategy and combat transnational gangs with connections to Central American countries, federal agencies have developed and implemented a variety of programs in the United States and in host countries in the region, such as El Salvador and Guatemala. This variety of programs reflects the different categories of the strategy and includes diplomatic efforts to establish a coordinated approach to the gang problem; efforts to facilitate the repatriation of gang members who are in the United States illegally; mechanisms to facilitate the sharing of investigative information between U.S. law enforcement agencies and foreign law enforcement agencies; programs to provide training to Central American law enforcement officials; and programs to provide recreational and vocational opportunities for at-risk youth, among others. Additional details on federal agencies’ antigang programs under each category of the Strategy are as follows: Diplomacy: State has led efforts to engage diplomatically with Central American countries to discuss gang issues. The department has led discussions with member countries of the Central American Integration System. Under this initiative, the United States and Central American countries first held discussions regarding regional gang threats in July 2007, at which time the United States announced the Strategy. The countries held a second, and the most recent, dialogue in December 2008, which focused on discussing practical measures to combat the threats of criminal gangs, narcotics trafficking, and illicit trafficking of firearms in Central America. At the conclusion, all participating countries signed a communiqué pledging their continued support in the fight against transnational threats, including gangs. Repatriation: ICE has implemented the Electronic Travel Document system to facilitate the issuance of travel documents for the removal of illegal aliens, including gang members, to El Salvador, Guatemala, and Honduras. Under this program, ICE electronically sends travel document applications to the consular officials of these countries; these officials can then electronically sign and certify the documents stating that the countries will receive the illegal aliens to be removed. These documents are available to ICE electronically through the Electronic Travel Document system. This program eliminates the need for consular officials to visit in person the individual awaiting removal from the United States before issuing documents, helping to reduce the amount of time it takes for ICE to receive travel documents from foreign countries and ultimately remove illegal aliens. 
According to ICE officials, the program can eliminate approximately 5 to 7 days that an alien would spend in detention, thus decreasing the cost incurred by the government. Law Enforcement: In 2005, ICE implemented Operation Community Shield—a nationwide initiative to arrest and remove criminal alien gang members from the United States. ICE began the operation to target violent transnational street gangs through the use of ICE’s broad law enforcement powers to identify, prosecute, and ultimately remove gang members from the United States. Although initially focused on MS-13, ICE expanded Operation Community Shield to target all transnational criminal street gangs, prison gangs, and outlaw motorcycle gangs. For its part, the FBI has implemented various programs to facilitate the exchange of information, such as criminal histories of suspected gang members, between law enforcement agencies in the United States and Central American countries. For example, in 2007, the FBI established a joint U.S.- Salvadoran Transnational Anti-Gang unit in El Salvador—called TAG—to exchange information on gangs and gang members between the Salvadoran national police and U.S.-based law enforcement agencies for use in criminal investigations and gang-member prosecutions in both countries. The unit includes investigators and analysts from the Salvadoran national police, prosecutors from El Salvador’s Attorney General’s Office, and two FBI agents. The TAG unit’s exchange of information on gang members has aided U.S. gang investigations in locations such as Charlotte, North Carolina; Omaha, Nebraska; and Los Angeles, California. The FBI plans to establish units like this in Guatemala and Honduras. Further, in 2006, the FBI began the Central American Fingerprint Exploitation program to collect and store existing criminal fingerprint records and other biometric information from the countries of Mexico, El Salvador, Guatemala, Belize, and Honduras in FBI databases and make them available to all U.S. local, state, and federal law enforcement agencies. These records are searched in the FBI’s Integrated Automated Fingerprint Identification System, with resulting matches shared with the contributing country for investigative purposes. The FBI has deployed the system in El Salvador, conducted assessments to determine how to deploy the system to Belize and Panama, and plans to conduct more assessments for deploying the system to Guatemala and Honduras. In addition, beginning in 2008, the FBI, with funding from State, has implemented the Central American Law Enforcement Exchange program wherein law enforcement personnel from El Salvador, Guatemala, and Honduras have visited locations in the United States to receive antigang training and share investigative practices with U.S. law enforcement personnel. In exchange, law enforcement personnel from the United States have visited El Salvador to provide antigang training and share practices with Central American police. Also related to law enforcement, ICE, in conjunction with State, established an international gang task force in Honduras in January 2010. Comprised of four Honduran police officers and one ICE agent, the task force is charged with developing intelligence to initiate and support gang investigations in the United States and Honduras. 
Capacity Enhancement: To help enhance the capacity of Central American governments to address gangs, beginning in 2006, State, DOJ, DHS, and other agencies have provided antigang training courses to Central American law enforcement officials through the International Law Enforcement Academy in El Salvador. The academy provides law enforcement training to officials from countries in Central and South America and the Caribbean. The training courses have focused on various aspects of gang enforcement efforts, such as police investigative techniques, prosecution, witness protection, and prison gang management, and participants have included police, prosecutors, judges, prison staff, border agents, and prevention and rehabilitation officials from El Salvador, Guatemala, Honduras, Mexico, Panama, and Belize. As another example of capacity enhancement, USAID has provided technical assistance and training to police, prosecutors, and judges, among others, to reform justice-sector institutions in El Salvador to help improve the investigation, prosecution, and prevention of crimes including those committed by gangs. Prevention: USAID has implemented gang prevention, intervention, and rehabilitation programs in Central American countries to provide youth with alternatives to joining gangs and assist former gang members’ reentry into society. For example, through partnerships with faith-based and nongovernmental organizations and local governments in Central America, USAID has started youth centers in specific communities to provide a safe environment for recreational and vocational opportunities for young people. Figure 2 shows individuals participating in activities at these youth centers in El Salvador. Additionally, USAID has sponsored a community-based policing program in Guatemala to improve the relationship between the police and local citizens by establishing collaborative partnerships between law enforcement and the communities they serve to solve problems and increase trust. According to USAID, the agency plans to expand the community policing program to five new communities in Guatemala as well as five communities each in El Salvador and Panama. Federal agencies have taken action to coordinate their antigang programs and share information with each other through various interagency and coordinating groups. For example, DOJ has established several entities to coordinate and share information on gang enforcement efforts, including transnational gangs, among DOJ and DHS component agencies. These entities include the Anti-Gang Coordination Committee that is comprised of representatives from a variety of DOJ components and agencies, as well as representatives from DHS’s ICE, and meets at least quarterly each year to report on the status of antigang efforts and disseminate information for coordination. Another entity used to coordinate, share information and intelligence on gangs, and serve as a deconfliction center for gang operations is the National Gang Targeting, Enforcement, and Coordination Center (GangTECC). GangTECC is comprised of participants from the FBI, ATF, DEA, and ICE, among other DOJ and DHS components, and has responsibility for coordinating multijurisdictional investigations of all gangs except FBI-led investigations involving the MS-13 and 18th Street gangs. In addition, the FBI’s MS-13 National Gang Task Force has only FBI participants and is responsible for coordinating FBI’s multijurisdictional investigations involving MS-13 and 18th Street gangs. 
Further, within Central American countries, federal agencies have mechanisms for coordinating and sharing information on antigang programs. For example, in El Salvador at the U.S. embassy, U.S. government agency officials who are involved in implementing antigang programs, such as officials from DOJ, DHS, State, USAID, and DOD, hold regular meetings to discuss antigang activities and coordinate their implementation efforts. Appendix IV provides additional information on the roles and responsibilities of these and other headquarters-level coordinating entities as well as task forces that coordinate antigang efforts at the field level within the United States. In July 2009, we reported on the benefits and challenges associated with some of these various coordinating mechanisms. Specifically, we reported that entities such as the Anti-Gang Coordination Committee, GangTECC, and the MS-13 National Gang Task Force provide DOJ and DHS with a means to operate across agency boundaries and facilitate communication among participating agencies at the headquarters level. However, we also reported that while some overlaps in mission may be appropriate, the entities had not clearly identified their roles and responsibilities, resulting in possible gaps or unnecessary overlaps in agencies' coordination and sharing of information on gang enforcement efforts, including those involving transnational gangs. Specifically, we reported that GangTECC and the MS-13 National Gang Task Force had overlapping missions and responsibilities for coordination and deconfliction of multijurisdictional investigations involving the MS-13 and 18th Street gangs. The two entities had these overlaps in part because the MS-13 National Gang Task Force already existed when GangTECC was established in 2006 and was not dismantled or folded into GangTECC at that time. We reported that funding both entities risked unnecessary federal resource expenditures when a single group could be more efficient. We recommended that DOJ, in consultation with DHS, articulate and differentiate the roles, responsibilities, and missions of headquarters-level entities, which would strengthen headquarters-level coordination efforts and help ensure that resources are not expended on overlapping missions. DOJ agreed with our recommendation, and as of February 2010, DHS and DOJ officials reported that they are discussing ways to streamline processes, modify policies, and establish cross-cutting performance measures for federal gang programs. At the field level, the FBI and ICE could strengthen their coordination and sharing of information on gang members and investigations specifically in El Salvador by reaching agreement on ICE's participation in the TAG. In El Salvador, both ICE and the FBI contact the Salvadoran national police to request information and intelligence on gangs to assist in the agencies' gang investigations. The FBI makes its requests directly through its agents assigned to the TAG, while ICE's requests for Salvadoran national police information on gangs or gang members are sent through ICE's country attaché in El Salvador, who forwards them to the FBI agents at the TAG. The FBI agents then pass the requests to Salvadoran national police officials if the FBI agents do not have the information needed to fulfill ICE's requests.
FBI and ICE officials stated that this process for coordinating information requests for the Salvadoran national police through the TAG has worked well, but that the process could be further strengthened by ICE's participation in the TAG unit. Our work on effective interagency coordination has shown that collaborating agencies should organize joint and individual efforts and facilitate information sharing. Collaborating agencies also look for opportunities to leverage each other's resources, thus obtaining additional benefits that would not be available if they were working separately. Coordinating in this way could yield benefits in terms of leveraging efforts already underway and minimizing any potential unnecessary duplication in federal agencies' requests for information on gang members or gang investigations. Various U.S. and Salvadoran officials have cited potential benefits that could be gained from both ICE and the FBI participating in the TAG. The director of El Salvador's national police stated that he would like to see federal law enforcement agencies other than the FBI involved in the TAG unit, particularly ICE, because of ICE's role in managing the removal of gang members from the United States to El Salvador. The unit chief of the FBI's MS-13 National Gang Task Force stated that the FBI would also benefit from ICE participating in the TAG to assist in deconflicting enforcement operations between the FBI and ICE in El Salvador. ICE's El Salvador country attaché stated that having an ICE agent at the TAG would streamline the current process since an ICE agent would be working directly at the TAG and be able to better identify possible connections between FBI and ICE gang cases and requests for information, thereby expediting information sharing. The FBI and ICE have discussed signing a joint memorandum of understanding to provide parameters for ICE's participation in the TAG. According to the unit chief of the FBI's MS-13 National Gang Task Force, in 2008 the FBI presented ICE with a memorandum of understanding for ICE to participate in the TAG unit in which the FBI legal attaché would manage the TAG and coordinate TAG gang investigations with ICE. The FBI memorandum required ICE to coordinate all of its activities related to the TAG with the FBI legal attaché. However, according to the head of ICE's National Gang Unit, ICE would like to participate in the TAG more as an equal partner as opposed to being subordinate to the FBI. ICE officials stated that clarification is needed with regard to the administrative details of placing an ICE agent at the TAG, such as housing and the location of the ICE agent's office, as well as agreement on the extent to which an ICE agent assigned to the TAG could focus on work for ICE-specific investigations as needed. To help obtain this clarification and try to reach consensus on ICE participation in TAG, ICE officials stated that, as of February 2010, they were drafting language to clarify their points of concern in the memorandum of understanding and plan to provide this draft language, once completed, to the FBI for its consideration. Further, the FBI and ICE have initiated discussions to conduct an assessment of ICE's possible participation in TAG by temporarily assigning to the unit an ICE agent who would conduct the assessment.
However, as of February 2010, ICE and the FBI have not yet reached agreement on this temporary assignment and ICE's plans to conduct the assessment because the two agencies disagree about the type and scope of the work that the agent would conduct for the assessment. The FBI and ICE could strengthen their coordination on gang investigations and enhance the efficiency of their existing process for exchanging information through the TAG by reaching consensus on ICE's participation in the unit. By reaching agreement on ICE's role, which the FBI and ICE have been considering since 2008, the two agencies would be in a better position to leverage their existing resources and information-sharing processes for gang investigations with a nexus to El Salvador. Earlier in this report, we discussed federal government efforts to measure the results achieved with the overall Strategy. Separately, federal agencies have established performance measures, such as numbers of arrests of gang members, for assessing their own individual antigang efforts. However, as most of these programs are in the early stages of implementation, agencies' data on their programs' performance cannot yet be used to assess the level of activity or program results across time. For example, USAID officials said that because most of the Mérida Initiative funding for its Central American programs was not released to USAID field missions until July and August 2009, programs funded by the Mérida Initiative have yet to produce any appreciable results to measure. In addition, several antigang programs in Central America have yet to be fully implemented. For example, ICE and the FBI are seeking to establish the Criminal History Information Program in El Salvador, Guatemala, and Honduras pending the hiring of needed analysts and completion of interagency agreements—expected to occur by the summer of 2010. Although federal agencies' performance data do not yet indicate the level of progress or results achieved over time, federal agencies have begun to report data on their levels of activity to date. For example, State has established measures for the antigang training courses offered through the International Law Enforcement Academy in El Salvador, including tracking data on the number of gang classes offered and the number of participants successfully completing them. According to State, the academy has offered 10 gang-related training courses since 2006 with a total of 416 participants from various countries, including El Salvador, Honduras, Guatemala, and Mexico. In addition to these measures, State asks participants to complete course evaluations and uses these evaluations to make modifications to the antigang training courses, such as changes in curriculum. USAID has also established measures for its own antigang programs, as shown in table 2. For example, USAID has established measures for its community policing program, such as the number of communities that have implemented community policing programs. USAID also requires program contractors to develop performance indexes for their respective programs. For example, the private contractor conducting USAID's Community-based Crime and Violence Prevention Project plans to review local government records to measure the crime rate in project-targeted communities.
For other efforts related to youth outreach centers, performance indicators tracked by program contractors include measures such as the number of youth who have received a job through the support of the outreach center or because of the skills acquired through the center. With regard to its gang enforcement efforts, ICE reports on the number of gang-related criminal and administrative arrests, among other measures. As shown in figure 3, the number of criminal and administrative gang-related arrests made by ICE has generally increased since fiscal year 2006, the first full year that it compiled this information. As shown in table 3, the FBI has also established various measures for each of its primary programs to combat transnational gangs. Among others, these measures include the number of gang cases worked and the number of countries and officials participating in the Central American Law Enforcement Exchange Program. In addition to these measures, the FBI has prepared after-action reports for the Central American Law Enforcement Exchange Program that summarize the results of the exchanges, including feedback from program participants, and has used these reports to modify the program. For example, according to the FBI, future exchange classes will include more practical and operational experiences (such as ride-alongs with local police officers and observation of police operations). Moreover, future classes will include more allowances for travel time and representation from more U.S. and foreign police departments. For the TAG, the FBI did not start to collect data for the unit's measures, such as the number of requests for assistance the unit receives, until April 2008, about 6 months after the unit was established. According to FBI officials, no systematic log or other mechanism was used to track and record data on the TAG's activities from September 2007 through April 2008, because, at the start-up of the unit, agents were assigned on a temporary basis and were primarily focused on establishing and initiating the operations of the unit. Thus, the FBI lacks information about the unit's activity levels for its first 6 months of operation. Since FBI agents began serving at the TAG on long-term, 2-year rotations, these agents have developed and maintained a log to track the unit's activities and the assistance provided to FBI offices and other law enforcement agencies. These data maintained by the TAG are used by the FBI, particularly the MS-13 National Gang Task Force, to identify trends in where transnational gang members are located and traveling and to proactively target specific geographic areas for gang education or enforcement activities. Even as agencies have worked to implement antigang programs in Central America, various factors in the region largely outside of the control of U.S. agencies pose challenges to the implementation of the programs. These factors include the ability of host countries to sustain programs after U.S. support ends; Central American law enforcement personnel issues; and legal restrictions in Central American countries, such as El Salvador, that can diminish the benefits of antigang efforts. Sustainability of programs by host countries: Federal agencies have identified challenges facing Central American countries in sustaining antigang programs currently being initiated and implemented in the region. Specifically, State officials identified several factors outside the control of U.S.
agencies that contribute to the uncertainty as to whether Central American countries will be able to sustain antigang programs over the long term. These officials reported that the ability of partner countries to take responsibility for managing and supporting U.S.-funded antigang programs is hampered by the fact that these countries often do not have the financial resources to sustain the programs in the absence of U.S. funds. State officials said that the ability of foreign countries to sustain antigang programs is hindered by other factors as well, such as corruption throughout the countries’ police and judiciary structures; a lack of investment in police training to make the countries’ police forces more professional and accountable; the countries’ inability to provide police forces with equipment such as communications gear and transportation assets; and the ability of the gangs to quickly adapt to law enforcement strategies. To help address these issues, federal agencies have taken steps prior to implementing these programs to plan for how the programs will be sustained, particularly after U.S. federal funding ends. For example, State officials said that when the department first developed and implemented these programs, it took into account the ability of foreign countries to take over and manage antigang programs initially funded and managed by the United States. Specifically, the officials noted that they worked directly with host country officials to gain an understanding of what programs or efforts the countries needed and what the countries might be able to support, both in terms of resources available and a supportive political climate, and then used that as a starting point to identify and shape the efforts that would receive U.S. support. According to USAID officials, USAID also took similar steps to work with host country officials to identify antigang efforts the countries had planned or were already underway for which USAID could provide additional support. Officials stated that they chose this approach because they sought to support local initiatives and programs whenever possible, as programs already planned or implemented by the host countries are more likely to be sustained by the countries themselves. Rather than setting up similar programs that would compete for resources, providing support for the host countries’ programs helped to broaden the reach of the programs into additional areas that the host countries may not have had resources for otherwise. Further, USAID officials reported that they coordinated with the host countries and other donors to ensure that the host countries would assume responsibility for the activities when USAID funding expires. In regard to FBI-led efforts such as the TAG, FBI officials stated that they also considered the ability of the foreign government to support these efforts during negotiations between the foreign governments, such as El Salvador, and the FBI to establish the units. To help prepare for sustaining antigang programs over the long term, federal agencies have also taken action after these programs have been implemented. For example, at the country level, USAID officials work directly with host government partners and with other donors through regular donor coordination groups and meetings to identify best practices for sustaining efforts and incorporating those practices into programs. 
Further, FBI officials have discussed with the Salvadoran government ways for the government to provide the resources and commitment needed to help sustain the TAG unit over the long term. According to the unit chief of the MS-13 National Gang Task Force who manages the TAG, other than what the FBI pays for in the salaries and living expenses of the FBI agents based at the TAG, the Salvadoran government provides the other resources necessary to sustain the TAG’s operations. Screening process for law enforcement personnel and transfer of personnel: Finding a sufficient number of Central American law enforcement personnel who can pass the screening process required to participate in U.S. investigative and information-sharing programs can complicate the implementation of those programs. For example, in order to become part of the FBI’s TAG unit, police officers must receive a background screening before initially joining the unit and undergo a polygraph every 6 months thereafter. Under the rules of the TAG, if officers fail the polygraph, they must leave the unit. FBI officials stated that, as was the case in El Salvador when they set up the TAG in that country, they expect to face challenges in identifying and successfully screening a sufficient number of Guatemalan and Honduran police officers to participate in the units planned for those countries. To address this challenge, the FBI is planning to establish smaller TAG units in Guatemala and Honduras (10 officers each instead of the 20 stationed at the Salvadoran TAG) and set aside more time and resources for screening Guatemalan and Honduran police officers for the units. FBI officials also stated that one disadvantage of the first officer exchange with El Salvador was that of the four participating Salvadoran police officers, only two officers continued to work on gang investigations in El Salvador after the exchange was completed with the other two being transferred to different areas within the Salvadoran police force. Although the FBI has an agreement with participating countries that requires exchange participants to be involved in gang investigations for at least 2 years after the exchange has concluded so they can put into practice the training they received and share it with their colleagues, FBI officials noted that there is little they can do to ensure participating countries abide by this requirement. Legal restrictions: According to officials we interviewed from the FBI and DOJ’s Criminal Division, Central American countries’ laws can also pose challenges for conducting gang investigations in the region, or for federal agencies that may seek the extradition of individuals to face trial for crimes committed in the United States. According to an FBI official with the MS-13 National Gang Task Force, legal restrictions in countries such as El Salvador do not permit law enforcement to conduct electronic surveillance or wiretaps as part of their investigations. According to this official, while this restriction has not hindered the ability of the Salvadoran national police to conduct investigations, it takes more time and effort to develop and corroborate the evidence through the use of other investigative methods. According to an official from DOJ’s Criminal Division, a recent amendment to the Salvadoran constitution now allows the use of electronic surveillance as a tool in criminal investigations and the Salvadoran National Assembly is currently considering implementing legislation. 
In regard to the extradition of gang members wanted in the United States, FBI officials noted that countries such as El Salvador have laws that prohibit the extradition of individuals to other countries where they could face a more severe punishment than would be given in El Salvador for the same crime. As an example, the unit chief of the MS-13 National Gang Task Force stated that the FBI had requested the extradition of a gang leader imprisoned in El Salvador for prosecution in Baltimore, Maryland, where the individual would have likely faced a longer prison sentence for the crime. However, the FBI's request for extradition was denied, preventing the gang leader from facing the charges against him in the United States. Nevertheless, DOJ's Criminal Division reported that on December 22, 2009, the Supreme Court of El Salvador voted to permit the first extradition of a Salvadoran national pursuant to the extradition treaty between the United States and El Salvador. Officials explained that future extraditions may still be limited by the penalties applicable to extradited individuals, such as the death penalty and life imprisonment. To help address these challenges, DOJ's Criminal Division continues to work with El Salvador under the existing treaty to facilitate extradition between the United States and El Salvador. Despite these challenges, officials from the agencies we interviewed stated that they continue to work to address or mitigate these challenges and to ensure their antigang programs are both sustainable and effective. Given the rapid growth of transnational gangs, their propensity for violence, and the public safety threats they pose, the response to these gangs requires a comprehensive and collaborative approach on the part of federal agencies that have different responsibilities and missions for combating transnational gangs. The Strategy to Combat the Threat of Criminal Gangs from Central America and Mexico (the Strategy) provides a roadmap for federal departments and agencies to follow in designing and implementing efforts to combat the gangs. However, while individual federal agencies may be familiar with the implementation of their own antigang programs, or those in the Strategy for which they are responsible, they do not necessarily have the visibility across the Strategy's categories to determine the extent to which the broader strategy is being implemented. Regardless of whether the Strategy remains a separate effort or is incorporated into the broader CARSI being developed, incorporating an approach for overseeing the interagency effort to combat transnational gangs would help provide visibility by designating an agency or entity to ensure that the Strategy is being implemented as planned. In addition, as the Strategy and related antigang efforts are implemented, it will be important to be able to track the effect the Strategy as a whole is having on the transnational gang problem. By establishing performance goals and measures or other mechanisms to evaluate the progress made in implementing the Strategy, federal agencies, Congress, and other stakeholders could have more specific information on agencies' performance under the Strategy, thereby enabling them to make more informed decisions as to what adjustments to the Strategy might be necessary, if any, to achieve its desired effect.
Relatedly, while federal agencies have taken various actions to collaborate on their transnational antigang efforts, agencies such as the FBI and ICE can take steps to further resolve their roles and participation in the TAG unit in El Salvador, especially given that the FBI and ICE have been discussing this participation since 2008. By reaching agreement on ICE's participation in the unit, both agencies could realize additional information-sharing benefits not only with each other, but with foreign counterparts, and help ensure that resources to combat gangs in El Salvador are used efficiently and effectively. To strengthen oversight and accountability for implementation of the Strategy to Combat the Threat of Criminal Gangs from Central America and Mexico (the Strategy), we recommend that the Special Assistant to the President for National Security Affairs, in conjunction with DOJ, DHS, State, USAID, and DOD, revise the Strategy to include (or include in the CARSI, if the Strategy is incorporated into that initiative) an approach or framework for overseeing implementation of the Strategy and antigang efforts in Central America, and performance goals and measures to assess progress made in achieving intended results under the Strategy. To strengthen federal agencies' coordination of antigang efforts and maximize use of federal law enforcement resources in El Salvador, we recommend that the Attorney General and the Secretary of Homeland Security reach agreement on ICE's role and participation in the TAG unit. We provided a draft of this report for review to the Departments of Defense (DOD), Homeland Security (DHS), Justice (DOJ), and State (State); the U.S. Agency for International Development (USAID); and the National Security Council (NSC). DHS, DOJ, and USAID provided written comments on the draft report. DOD, State, and the NSC did not provide comments. In their written comments, DHS and DOJ concurred with our recommendation that the Attorney General and the Secretary of Homeland Security reach agreement on ICE's role and participation in the TAG unit in El Salvador, and outlined steps they have begun to take to address this recommendation. For example, DHS commented that ICE officials have reviewed the FBI's memorandum on the TAG to clarify the roles and responsibilities of ICE and the FBI within the unit in El Salvador. DOJ also commented that any agreement on DHS's participation in the TAG unit in El Salvador should also address DHS's participation in future TAG units that the FBI expects to create in Guatemala and Honduras. In its written comments, USAID acknowledged the importance of increased interagency coordination to help bolster U.S. government efforts to address the threat of criminal gangs in Central America and the United States. DHS's, DOJ's, and USAID's written comments are contained in appendixes VI, VII, and VIII, respectively. We also incorporated technical comments provided by DHS, DOJ, and USAID as appropriate. We are sending copies of this report to the Special Assistant to the President for National Security Affairs; the Attorney General; the Secretaries of Homeland Security, State, and Defense; the Administrator of USAID; selected congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Eileen Larence at (202) 512-8777 if you or your staff have any questions concerning this report.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. Federal agencies have specifically identified two primary gangs with connections to Central America—Mara Salvatrucha (MS-13) and 18th Street—as posing the most significant public safety threats to the United States and Central American countries among transnational gangs. This appendix summarizes information federal agencies have on the leadership structure and transnational criminal activities of these two gangs. Federal agencies have reported that these gangs generally do not have a centralized leadership structure. Rather, most gang cliques—or local gang groups—tend to operate in specific areas or communities in the United States and Central America and have autonomy to function independently without reporting to a central authority or leadership. However, law enforcement officials have reported that in some gang cliques activities are directed by gang leaders incarcerated in Salvadoran prisons. These leaders communicate by cell phones or coded written messages that are smuggled in and out of the prisons. Of the five U.S. locations we visited to discuss gang investigations with federal, state, and local law enforcement, three reported instances in which gang leaders incarcerated in El Salvador directed gang members to organize new cliques, coordinate with cliques in other parts of the country, and carry out criminal activities, including murder. For example, an investigation in Charlotte, North Carolina, found that MS-13 clique leaders were sent to establish cliques at the behest of leaders in El Salvador. Further, according to the FBI, cliques convene periodically with gang leadership to determine future criminal activities and punishments for individual delinquent gang members. Transnational gangs engage in criminal activities that not only affect local communities in the United States and Central American countries, but that also stretch beyond U.S. and Central American borders. For example, in its 2006 Central America and Mexico Gang Assessment, U.S. Agency for International Development (USAID) reported that gang activity has transcended the borders of Central America, Mexico, and the United States and evolved into a transnational concern. USAID reported that while gang activity used to be territorially confined to local neighborhoods, globalization, sophisticated communications technologies, and travel patterns have facilitated the expansion of gang activity across neighborhoods, cities, and countries. Further, the 2009 National Gang Threat Assessment indicated that U.S. street gangs are expanding their influence in most regions and broadening their presence outside the United States to develop associations with criminal organizations in Mexico and Central America. According to federal agencies, major cross- border criminal activities in which transnational gangs engage include drug trafficking and distribution, human smuggling, and extortion, among others. With regard to drug trafficking and distribution, assessments conducted by the National Gang Intelligence Center, the National Drug Intelligence Center, the U.S. Agency for International Development, and the Federal Bureau of Investigation (FBI) indicate that connections exist between drug trafficking organizations and MS-13 and 18th Street gangs and these connections may be expanding. 
Several agencies also suggested that both gangs’ connections with prison gangs such as the Mexican Mafia may provide a link to drug trafficking organizations, as one of the prison gangs’ main sources of income is extorting money from drug distributors outside prison and distributing various narcotics within and outside the prison system. Further, the U.S. Southern Command (SOUTHCOM) has reported that drug trafficking organizations have worked with MS-13 and 18th Street gangs to distribute their narcotics in Central America and the United States. MS-13 and 18th Street gangs have also been identified as engaging in retail street distribution of narcotics in the United States, for example. Although connections between gangs and drug trafficking organizations exist and may be growing, these connections may not yet reflect large-scale, organized cooperation between gangs and drug- trafficking organizations. For example, Drug Enforcement Administration (DEA) officials stated that while individual gangs and gang members might work with drug trafficking organizations for their own personal gain, DEA has not observed or obtained evidence of high-level, organized cooperation between gangs and drug trafficking organizations. Likewise, the United Nations Office on Drugs and Crime has reported that because the Central American drug market is too small for gangs to develop into major distributors, the gangs lack the maritime capabilities required for the transport of drugs from Central America, and gangs have limited involvement in the U.S. narcotics market, the link between Central American gangs and drug trafficking organizations may not yet be well- developed. In addition to their involvement in drug trafficking and distribution, transnational gangs have also been involved in other cross-border crime, including alien smuggling and extortion. For example, the 2009 National Gang Threat Assessment indicated that U.S.-based gang members are increasingly involved in cross-border criminal activity that includes smuggling illegal aliens from Mexico into the United States. Moreover, U.S. Customs and Border Protection and the FBI reported that MS-13 is known to smuggle persons from Central America into the United States from Mexico. The FBI has also reported that 18th Street may be involved in human trafficking and alien smuggling. However, the FBI stated that it is difficult to determine gang members’ roles—whether they are organizing the border crossings, smuggling other aliens into the country, or being smuggled themselves. In addition, federal agencies have reported that extortion and retail drug distribution provide financial support for both MS-13 and 18th Street. Cliques have been reported to extort those who conduct illicit activities themselves, such as prostitution or drug distribution entities, within the gangs’ territory. In the United States, transnational gang members are known also to extort money from businesses that operate within gang territory in exchange for not inflicting harm on the businesses or those businesses’ owners, family members, or workers. Both MS-13 and 18th Street gangs also conduct extortion across international borders. For example, gang members in the United States may extort immigrants with threats that they will retaliate against family members in Central America if law enforcement is notified. In some instances, the gangs have coupled extortion with kidnapping. 
The FBI has reported that MS-13 has kidnapped illegal immigrants and then extorted money from their families for the immigrants' safe return. Finally, with regard to links to terrorists, federal agencies have found no indication that U.S. gangs with transnational ties have routine relationships with terrorist organizations. According to the National Gang Intelligence Center (NGIC), three basic types of gangs have been identified by gang investigators: street gangs, prison gangs, and outlaw motorcycle gangs. The focus of this report has been on gangs with ties to Central America, such as Mara Salvatrucha (MS-13) and 18th Street, which belong to the "street gang" category. However, all three categories contain gangs with members who are either present or criminally active (or both) in more than one country. This appendix provides examples of other gangs in these categories, including descriptions of their transnational activities. Most of the information presented in this appendix is from the April 2008 Attorney General's Report to Congress on the Growth of Violent Street Gangs in Suburban Areas, supplemented by information obtained from interviews with agency officials. According to federal law enforcement, street gangs are typically associated with a particular neighborhood, town, or city, which they may incorporate into the name of their gang. However, law enforcement officials report that several street gangs have attained regional or national status and operate in a number of states throughout the country. Two examples of these street gangs that also have transnational connections include the following: Florencia 13: Florencia 13 originated in Los Angeles in the early 1960s. Gang membership is estimated to be more than 3,000. This gang operates primarily in California, but is expanding to other states. Drug trafficking is a primary source of income for the gang, whose members smuggle cocaine and methamphetamine from Mexico into the United States for distribution. Florencia members are also involved in other criminal activities including assault, drive-by shooting, and homicide. Latin Kings: Latin Kings is a collection of over 160 structured gangs, referred to as chapters, operating in 158 cities in 34 states in the United States. The gang's membership is estimated to be 20,000 to 35,000. Most members are Mexican-American or Puerto Rican males whose main source of income is street-level drug sales. The gang obtains drugs primarily from Mexican drug trafficking organizations that operate along the U.S.-Mexico border. Members also engage in other criminal activity such as assault, burglary, homicide, identity theft, and money laundering. According to federal law enforcement officials, prison gangs are criminal organizations that operate within federal and state prison systems as self-perpetuating criminal entities. These gangs also operate outside of prisons, typically through the activities of members who have been released from prison into communities. Examples of prison gangs with transnational connections include the following: Barrio Azteca: Barrio Azteca is one of the most violent prison gangs in the United States. The gang is highly structured and has an estimated membership of 2,000. Most members are either Mexican national or Mexican American males. Barrio Azteca is most active in the southwestern United States, primarily in federal, state, and local corrections facilities in Texas and outside of prison in southwestern Texas and southeastern New Mexico.
The gang’s main source of income is derived from smuggling illegal drugs from Mexico into the United States for distribution both inside and outside prisons. Gang members often transport illicit drugs across the border for drug trafficking organizations. Gang members are also involved in other crimes including alien smuggling, arson, assault, extortion, kidnapping, and weapons violations. Hermanos de Pistoleros Latinos: This is a Hispanic prison gang formed in Texas in the late 1980s. It operates in most prisons in the state and on the streets in many communities in Texas, particularly Laredo. The gang is also active in several cities in Mexico, and its largest contingent in that country is located in Nuevo Laredo. The gang is structured and is estimated to have 1,000 members. These members maintain close ties to several Mexican drug trafficking organizations and are involved in the trafficking of large quantities of illegal drugs from Mexico into the United States for distribution. Mexican Mafia: This gang was formed in the late 1950s within the California prison system. It is loosely structured and has strict rules that must be followed by the estimated 350 to 400 members. Most of these members are Mexican American males who previously belonged to a Southern California street gang. Mexican Mafia is active in 13 states, but California remains the power base. The gang’s main source of income is extorting drug distributors outside prison and distributing illegal drugs within the prison system and on the streets. Some members have direct links to Mexican drug trafficking organizations. Other criminal activities include controlling gambling and homosexual prostitution in prison. Mexikanemi: The Mexikanemi prison gang was formed in the early 1980s within the Texas prison system. The gang is highly structured and estimated to have 2,000 members, most of whom are Mexican national or Mexican American males who were living in Texas at the time of their incarceration. This gang poses a significant drug trafficking threat to communities in the southwest region of the United States, particularly Texas. Gang members reportedly traffic illegal drugs from Mexico into the United States for distribution inside and outside of prison. Gang members obtain these drugs from Mexican drug trafficking organizations. Surenos: As some individual Hispanic street gang members enter the prison systems, they put aside rivalries with other Hispanic gangs and unite under the name Surenos. The original Mexican Mafia members, most of whom were from Southern California, considered Mexicans from the rural, agricultural areas of Northern California as weak and viewed them with contempt. To distinguish themselves from these northern agricultural workers, members of Mexican Mafia began to refer to the Hispanic gang members that worked for them as Surenos (Southerners). Surenos gang members’ main source of income is retail-level distribution of illegal drugs both within prison systems and in the community, as well as the extortion of drug distributors on the streets. Some members have direct links to Mexican drug trafficking organizations and broker deals for Mexican Mafia as well as their own gang. Other criminal activities of Surenos members include assault, car jacking, home invasion, homicide, and robbery. Texas Syndicate: Texas Syndicate is one of the largest and most violent prison gangs. 
It is active on both sides of the U.S.-Mexico border and poses a significant drug trafficking threat to communities in the southwest region. The gang is highly structured and is estimated to have 1,300 members, most of whom are Mexican American males between 20 and 40 years of age. Gang members smuggle illegal drugs from Mexico into the United States for distribution inside and outside of prison. Gang members have direct working relationships with drug trafficking organizations. According to federal law enforcement, outlaw motorcycle gangs (OMG) have been in the United States longer than many other gangs and are most numerous in the United States. According to the Department of Justice (DOJ) Gang Unit, there are more than 300 active OMGs in the United States, ranging in size from single chapters with five or six members to hundreds of chapters with thousands of members worldwide. DOJ considers OMGs to be transnational criminal organizations because they typically maintain chapters in more than one country and engage in illicit cross-border activities. OMG chapters are found on every continent. OMGs are highly organized with well-defined hierarchies, defined rules in the form of either bylaws or constitutions, and clear recruitment, acceptance, and promotion processes for members. OMGs have a distinct chain of command, much like a corporation with positions such as president, vice-president, treasurer, sergeant-at-arms, and road captain at the chapter level. Generally, OMG organizational structure consists of individual chapters grouped by geographic region all being headed by a national president. While national and international presidents may exist, each regional chapter is run by its own president. The gang leadership requires loyalty and obedience from members. OMG members may be governed by a code of ethics, a constitution, or a strict set of bylaws. Descriptions of major OMGs present in the U.S. follows—including details on the gangs’ transnational connections. Hells Angels Motorcycle Club: According to U.S. law enforcement officials, the Hells Angels are the largest and the most criminally prominent of the “Big Five” OMGs. The gang includes 2,000 to 2,500 members belonging to over 230 chapters in the United States and 26 foreign countries. In the United States, law enforcement officials estimate that Hells Angels has more than 92 chapters in 27 states with over 800 members. Gang members produce, transport, and distribute marijuana and methamphetamine and transport and distribute cocaine, hashish, heroin, and other drugs. Other crimes perpetrated by Hells Angels members include assault, extortion, homicide, money laundering, and motorcycle theft. Bandidos: U.S. law enforcement authorities consider the Bandidos and Hells Angels to be the two largest OMGs in the United States. This gang has approximately 900 members belonging to over 93 chapters in the United States, with a total of 2,000 to 2,500 members when the membership in 13 other countries is added to U.S. membership. Bandidos is involved in transporting and distributing cocaine and marijuana and producing, transporting, and distributing methamphetamine. The gang is most active in the Pacific, southeast, southwest, and west central regions of the United States and is expanding in these regions by forming new chapters and allowing members of support clubs to form or join Bandidos chapters. 
Mongols Motorcycle Club: According to law enforcement officials, the Mongols Motorcycle Club is an extremely violent OMG that poses a serious criminal threat to the Pacific and southwest regions of the United States. Mongols members transport and distribute cocaine, marijuana, and methamphetamine and frequently commit violent crimes including assault, intimidation, and murder to defend Mongols territory and uphold its reputation. Most of the club’s 300 members are Hispanic males who live in the Los Angeles area, and many are former street gang members with a long history of using violence to settle grievances. The club also maintains ties to Hispanic street gangs in Los Angeles. In the 1980s, the Mongols OMG seized control of southern California’s OMG activity from the Hells Angels and today is allied with the Bandidos, Outlaws, and Pagan’s OMGs against the Hells Angels. Outlaws: Outlaws has more than 1,700 members belonging to 176 chapters in the United States and 12 foreign countries. U.S. law enforcement officials estimate that Outlaws has more than 86 chapters in 21 states with over 700 members in the United States. Outlaws are the dominant OMG in the Great Lakes Region of the United States. Gang members produce, transport, and distribute methamphetamine and transport and distribute cocaine and marijuana. Other criminal activities engaged in by Outlaws include arson, assault, explosives operations, extortion, fraud, homicide, intimidation, kidnapping, money laundering, prostitution, robbery, theft, and weapons violations. Outlaws compete with the Hells Angels for membership and territory. Vagos Motorcycle Club: The Vagos OMG has hundreds of members in the United States and Mexico and poses a serious criminal threat to those areas in which chapters are located. Law enforcement reports that Vagos has approximately 300 members among 24 chapters in California, Hawaii, Nevada, Oregon, and three chapters in Mexico. Club members produce, transport, and distribute methamphetamine and distribute marijuana. Vagos members also have been implicated in other criminal activities including assault, extortion, insurance fraud, money laundering, murder, vehicle theft, weapons violations, and witness intimidation. Black Pistons: This OMG is the official support club of the Outlaws Motorcycle Club. Established in 2002 with the backing of the Outlaws, Black Pistons has expanded rapidly throughout the United States and into Canada and Europe. The club has an estimated 70 domestic chapters in 20 states and an unknown number of foreign chapters in Belgium, Canada, Germany, Great Britain, Norway, and Poland. The exact number is unknown but is estimated to be more than 200 in the United States. The Outlaws OMG uses Black Pistons chapters as sources of prospective new members. The Outlaws also uses Black Pistons chapters to conduct criminal activity, especially transporting and distributing drugs. Black Piston members also engage in assault, extortion, fraud, intimidation, and theft. Federal agencies have developed and implemented a wide range of programs to combat transnational gangs in the United States that have links to Central America. As shown in table 4, some of these efforts directly combat the gangs while others seek to improve criminal justice systems to increase the ability of Central American countries to apprehend and prosecute criminals including gang members (such as Community Policing and Justice Sector Reform). 
As also shown in table 4, other efforts are aimed at improving educational and employment opportunities of youth to reduce the motivation for joining gangs (such as Youth Centers). Federal agencies have coordinated their programs to combat transnational gangs through working groups at the interagency level, coordinating groups within Central American countries, and federally led antigang task forces at the local level. For example, at the headquarters level, through the National Gang Intelligence Center (NGIC), representatives of the Department of Justice’s (DOJ) Federal Bureau of Investigation (FBI); Drug Enforcement Administration (DEA); Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF); and Bureau of Prisons (BOP), among other components, and the Department of Homeland Security’s (DHS) U.S. Immigration and Customs Enforcement (ICE) and U.S. Customs and Border Protection (CBP) coordinate and share information on gangs that are a threat to U.S. communities. Table 5 provides additional information about headquarters-level entities’ coordination of antigang efforts. These coordinating entities have different roles and responsibilities, but in general, they serve as mechanisms for deconflicting cases; providing law enforcement agencies with information on gangs and gang activities, including transnational gangs; and coordinating participating agencies’ strategies and task forces. In localities we contacted, law enforcement agencies obtained assistance from the antigang coordinating entities to support their investigations of MS-13 and 18th Street gangs and gang members. For example, in one location we visited, the National Gang Intelligence Center provided an analyst to an FBI antigang task force to help that task force analyze MS-13 gang data for an ongoing investigation. At another location, the FBI antigang task force obtained assistance from the TAG unit in El Salvador to help identify two MS-13 gang members who were in a Salvadoran prison and communicated with MS-13 gang members in the United States to direct the gang members’ criminal activities. Additionally, DOJ’s Gang Unit has assisted with the prosecution of MS-13 and 18th Street gang cases in the United States. For example, in one location we contacted, the Gang Unit assisted the local U.S. Attorney’s Office with the prosecution of a gang case under the Racketeer Influenced and Corrupt Organizations Act. The Gang Unit provided this assistance because, according to the U.S. Attorney’s Office, the office did not have enough experienced attorneys who were familiar with using the Act to prosecute gang members. At the field level, federal law enforcement agencies utilize task forces to coordinate their gang enforcement efforts. For example, the FBI’s Violent Gang Safe Streets Task Forces coordinate with federal, state, and local law enforcement to investigate all active gang threats, but have conducted investigations on MS-13 and 18th Street gangs across the country. In locations we contacted, including Omaha, Nebraska; Los Angeles, California; and Charlotte, North Carolina, FBI task forces investigated MS- 13 gangs and determined that gang members in those locations were communicating with gang members in El Salvador. Similar to the FBI’s Violent Gang Safe Street Task Forces, ATF’s Violent Crime Impact Teams, which partner ATF with federal, state, and local law enforcement agencies to reduce firearms-related violent crime including gang crime, may investigate gangs with transnational connections in specific locations. 
For example, in 2007 ATF led an investigation in Baltimore, Maryland, involving MS-13 gang members who committed murder and robbery. Through its task force, ATF coordinated this investigation with other agencies including FBI, ICE, the United States Attorneys’ Office, local police agencies, and the Salvadoran national police to investigate and prosecute these gang members. In addition, through Operation Community Shield, ICE works primarily with state and local law enforcement agencies to investigate gangs whose members are foreign- born or in the United States illegally, or both, or have been involved in crimes with a nexus to the U.S. borders. To determine to what extent the U.S. federal government has developed a strategy to combat transnational gangs with connections to Central America, we examined the interagency strategy, called the Strategy to Combat the Threat of Criminal Gangs from Central America and Mexico (the Strategy), and compared the contents of the Strategy to select criteria in our prior work on desirable characteristics of an effective national strategy, including (1) clear purpose, scope, and methodology; (2) discussion of problems, risks, and threats; (3) desired goals, objectives, activities, and performance measures; and (4) delineation of roles and responsibilities. In regard to the National Security Council (NSC), we discussed its role in developing and implementing the Strategy with a mix of the departments and agencies that participated in the NSC’s International Organized Crime Policy Coordinating Committee (IOCPCC) such as the Department of State (State), the Department of Justice (DOJ), and the U.S. Agency for International Development (USAID). We also examined the roles and activities of the various federal agencies under the Strategy including DOJ, the Department of Homeland Security (DHS), State, the Department of Defense (DOD), the Department of the Treasury (Treasury), and USAID and their component agencies. To do this we reviewed antigang program documents and interviewed officials from DOJ, DHS, State, DOD, Treasury, and USAID in headquarters as well as component agencies such as the Federal Bureau of Investigation (FBI) within DOJ and U.S. Immigration and Customs Enforcement (ICE) within DHS to obtain their views about the framework, including the different categories, of the Strategy. To determine how U.S. federal agencies have implemented programs to carry out the Strategy and combat transnational gangs, coordinated these programs, and assessed their results, we examined a mix of DOJ’s, DHS’s, State’s, USAID’s, and their component agencies’ plans, performance data, reports, and assessments for fiscal years 2006 through 2009. We compared federal agencies’ efforts to coordinate and share information on their transnational antigang programs to criteria in our prior work on effective interagency collaboration and results-oriented government. To assess the reliability of statistical information we obtained, such as data on program performance and outcomes, we discussed the sources of the data with agency officials and reviewed documentation regarding the compilation of data. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we examined federal agencies’ funding allocated to transnational antigang programs. 
In particular, we reviewed agencies’ budget requests for fiscal years 2008 through 2010; appropriations acts for DOJ, DHS, State, and USAID for those fiscal years; and expenditure and other plans for the Mérida Initiative. To obtain information on federal efforts to combat the gangs as well as the extent to which the agencies coordinated their efforts with other agencies, we interviewed a mix of federal, state, and local law enforcement officials in seven U.S. locations—Baltimore, Maryland; Charlotte, North Carolina; Laredo and McAllen, Texas; Los Angeles, California; Nashville, Tennessee; and Omaha, Nebraska—and U.S. federal, foreign, and three nongovernmental agencies’ officials in El Salvador and Guatemala. To understand the process of how ICE handles and removes gang members who are in the United States illegally, we visited ICE’s South Texas Detention Facility in Pearsall, Texas, and interviewed officials with ICE’s Office of Detention and Removal Operations. We selected the U.S. locations and El Salvador and Guatemala based on a mix of criteria that included locations (1) along the U.S. borders, (2) where U.S. federal agencies have implemented antigang programs, and (3) where federal law enforcement agencies have conducted operations involving gangs with connections to Central America. For our site visits to foreign locations, we consulted with officials of federal agencies to identify in which foreign locations their agencies had efforts underway. Of the countries in the region, agency officials suggested we visit El Salvador and Guatemala because more antigang initiatives were underway and further along as compared to efforts in other countries. Given this, we selected these countries to obtain more information on these efforts and evaluate the effect they have had on the gang problem. We selected the South Texas Detention Facility because it was the facility identified by ICE officials for handling the most gang members awaiting removal. More specifically, in the U.S. locations we interviewed officials from some of the following federal agencies: the FBI; Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF); Drug Enforcement Administration (DEA); U.S. Attorneys’ Office; ICE; and in Laredo, Texas, U.S. Customs and Border Protection (CBP). We also interviewed a mix of state and local law enforcement officials in Charlotte, North Carolina; Los Angeles, California; Nashville, Tennessee; and Omaha, Nebraska. In Guatemala and El Salvador, we interviewed a mix of officials from the following federal agencies: DOJ’s Office of International Affairs, FBI, DEA; ICE; State; USAID; the State-sponsored International Law Enforcement Academy; and DOD. We also interviewed a mix of officials from Salvadoran and Guatemalan government agencies, including the countries’ national police and the Salvadoran prosecutors’ office. We also observed some of the activities related to USAID-sponsored prevention efforts in El Salvador and Guatemala such as the youth centers and interviewed participants, local government officials involved in the efforts, and members of the community about their views and the effect of the programs. 
Additionally, we interviewed officials from intergovernmental and nongovernmental organizations such as the Organization of American States, Washington Office on Latin America, the Central American Coalition for Prevention of Youth Violence, the Centro de Formacion y Orientacion, and the Instituto Universitario de Opinion Publica to obtain their perspectives on the gang problem in Central America. We also interviewed officials from contractors and companies such as Creative Associates International, Inc., and Pepsi Co. that have partnered with USAID or participated in USAID efforts to establish gang prevention programs and provide employment opportunities for ex-gang members in countries such as El Salvador and Guatemala. The information we obtained from interviewing officials in the U.S. and Central American locations cannot be generalized across all locations in the United States or Central America. However, because we selected these locations based on a variety of factors, they provided us with an overview of the agencies’ antigang programs, examples of coordination and measurements to assess results, and any challenges with implementation of the programs. We conducted this performance audit from April 2008 through April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Rebecca Gambler, Assistant Director; Heather Dowey; Sally Gilley; Mike Harmond; Chris Hatscher; Michael Lenington; Amanda Miller; Janet Temko; and Adam Vogt made significant contributions to this report.
Thousands of gang members in the United States belong to gangs such as MS-13 and 18th Street that are also active in Central American countries. Federal entities with responsibilities for addressing Central American gangs include the National Security Council (NSC); the Departments of Homeland Security (DHS), Justice (DOJ), and State; and the U.S. Agency for International Development (USAID). GAO was asked to review federal efforts to combat transnational gangs. This report addresses (1) the extent to which the federal government has developed a strategy to combat these gangs, and (2) how federal agencies have implemented the strategy and other programs to combat these gangs, coordinated their actions, and assessed their results. GAO examined federal agencies' antigang plans, resources, and measures; interviewed federal, state, and local officials in seven localities representing varying population sizes and geographic regions; and interviewed U.S. and foreign officials in El Salvador and Guatemala where U.S. agencies have implemented antigang programs. The results of these interviews are not generalizable. The NSC, in conjunction with State, DOJ, DHS, and USAID, developed a strategy to combat gangs with connections to Central America; however, the strategy lacks an approach or framework to oversee implementation and performance goals and measures to assess progress. GAO previously reported that characteristics such as defining the problem to be addressed as well as the scope and methodology of the strategy; describing agencies' activities, roles, and responsibilities; providing an approach to oversee implementation; and establishing performance measures, among other characteristics, can enhance a strategy's effectiveness. While the antigang strategy contains some of these characteristics, such as identifying the problems and risks associated with the gangs, describing the scope and purpose of the strategy, and defining roles and responsibilities of federal agencies as well as specific implementation activities, it lacks other characteristics such as an approach for overseeing implementation and goals and measures for assessing progress. For example, although agencies coordinate the strategy's implementation through an interagency task force, agency officials reported that this task force does not oversee the strategy's implementation and that no entity exercises oversight responsibility for the strategy's implementation. Similarly, while State and USAID are developing measures to assess the outcomes of their antigang programs, these measures do not encompass all programs under the strategy or track results of the strategy as a whole. Incorporating these characteristics could enhance the accountability of agencies to implement the strategy and provide a means for assessing progress. To carry out the strategy and combat transnational gangs, federal agencies have implemented programs and taken steps to coordinate their actions and develop performance measures to assess results of individual programs; but, coordination could be strengthened in an antigang unit in El Salvador by reaching agreement on Immigration and Customs Enforcement's (ICE) role in the unit, the only such unit currently in Central America. Agencies use various interagency groups to coordinate with each other, such as DOJ's Anti-gang Coordination Committee. However, improved coordination at the FBI-initiated antigang unit in El Salvador could enhance information sharing. 
While the FBI requests information directly from Salvadoran police, ICE requests go to its country attaché, then to FBI agents at the unit, who pass them on to Salvadoran police, as ICE does not have an agent at the unit. Prior GAO work has shown that agencies should facilitate information sharing and look for opportunities to leverage resources. Although FBI and ICE officials agree that the process could be improved by posting an ICE agent at the unit and have been discussing the possibility since 2008, they have not yet reached agreement on ICE's role. By reaching agreement, the FBI and ICE could strengthen coordination and information sharing. While agencies have established measures to assess their programs, data collection for many of these measures is in the early stages because some of the programs are just getting started.
In the United States, the national inventory of commercial spent nuclear fuel amounts to nearly 70,000 metric tons, which is stored at 75 sites in 33 states (see fig. 1). Fuel for commercial nuclear power reactors is typically made from low-enriched uranium fashioned into thumbnail-size ceramic pellets of uranium dioxide. These pellets are fitted into 12- to 15-foot hollow rods, referred to as cladding, made of a zirconium alloy. The rods are then bound together into a larger assembly. A typical reactor holds about 100 metric tons of fuel when operating—generally from 200 to 800 fuel assemblies. The uranium in the assemblies undergoes fission—a process of splitting atoms into fragments and neutrons that then bombard other atoms—resulting in a sustainable chain reaction that creates an enormous amount of heat and radioactivity. The heat is used to generate steam for a turbine, which generates electricity. The fragments created when fission splits atoms, or when bombarding neutrons bond with atoms, include hundreds of radioisotopes, or radioactive substances, such as krypton-90, cesium-137, and strontium-90. Furthermore, the neutron bombardment of uranium can also create heavier radioisotopes, such as plutonium-239. The radioisotopes produced in a reactor can remain hazardous from a few days to many thousands of years; these radioisotopes remain in the fuel assemblies and as components of the resulting spent fuel. Each fuel assembly is typically used in the reactor for 4 to 6 years, after which most of the fuel it contains is spent, and the uranium dioxide is no longer cost-efficient at producing energy. Reactor operators typically discharge about one-third of the fuel assemblies every 18 months to 2 years and place this spent fuel in a pool to cool. Water circulates in the pool to remove the enormous heat generated from the radioactive decay of some of the radioisotopes. As long as circulating water continues to remove this heat, pool water temperature is maintained well below boiling, typically below 120 degrees Fahrenheit. If exposed to air, however, recently discharged spent fuel could rise in temperature by hundreds or thousands of degrees Fahrenheit. A pool is needed to ensure that heat generated from the decay of radioisotopes, particularly immediately after discharge from a reactor, does not damage fuel rods and release radioactive material. Figure 2 shows a fuel pellet for a commercial nuclear reactor and a fuel rod in an assembly. The pools of water are typically about 40 feet deep, with at least 20 feet of water covering the spent fuel, and the water is cooled and circulated to keep the assemblies from overheating. These pools are constructed according to NRC's requirements, with walls typically 4 to 6 feet thick, made of steel-reinforced concrete and lined with steel. The pools must be located inside what is known as the vital area of a nuclear power reactor, protected by armed guards, physical barriers, and limited access. Within the vital area, pools may be in one of two locations, depending on the type of reactor. In a pressurized water reactor, spent fuel is stored in a pool at or below ground level, but in a typical boiling water reactor, spent fuel is stored in a pool well above ground level, near the reactor vessel, as high as three stories above ground. Figure 3 shows the location of a spent fuel pool for a boiling water reactor, and figure 4 shows a typical spent fuel pool.
As part of the construction permit and operating license application process for nuclear reactors, NRC requires companies licensed to operate these reactors to assess natural hazards, such as earthquakes, floods, hurricanes, and tidal waves that their reactors might face. Reactor operators must also show that their proposed pool designs would survive the most severe natural hazards, or combinations of less severe hazards, expected for that particular area. Since the Fukushima Daiichi disaster, NRC has required reactor operators to reevaluate their original design criteria against more recent seismic information that has been developed since many of the nuclear power plants were first licensed. According to NRC documents, NRC developed its requirements with a concept of "defense-in-depth," which is a way of designing and operating nuclear power reactors that focuses on creating multiple independent and redundant layers of defense to compensate for potential human and mechanical failures so that no single layer, no matter how robust, is exclusively relied upon. All radioactive substances—referred to as radioisotopes—are unstable and spontaneously transform themselves into more stable isotopes by capturing or emitting atomic particles or by fission. The time it takes a radioisotope to decay into more stable substances is measured by a half-life. A half-life is the length of time it takes for one-half of a particular radioisotope to decay into a new isotope. After two half-lives, one-quarter of the original radioisotope will be left, but three-quarters will have changed to the new isotope. After 10 half-lives, only 1/1,000 of the original radioisotope is left (a brief worked example of this arithmetic appears below). Cesium-137, for example, has a half-life of 30.2 years and will take over 300 years to decay to negligible amounts. Cesium-137 contributes to the decay heat in a spent fuel pool and is a significant land contaminant if released. Typically, according to NRC officials, spent fuel must remain in a pool for at least 5 years to decay enough to remain within the heat limits of currently licensed dry cask storage systems. Spent fuel cools very rapidly for the first 5 years, after which the rate of cooling slows significantly. Spent fuel can be sufficiently cool to load into dry casks earlier than 5 years, but doing so is generally not practical. Some casks may not accommodate a full load of spent fuel because of the greater heat load. That is, the total decay heat in these casks needs to be limited to prevent the fuel cladding from becoming brittle and failing, which could affect the alternatives available to manage spent fuel in the future, such as retrieval. In recent years, reactor operators have moved to a slightly more enriched fuel, which can burn longer in the reactor. Referred to as high-burn-up fuel, this spent fuel may be hotter and more radioactive coming out of a reactor than conventional fuel and may have to remain in a pool for as long as 7 years to cool sufficiently. In the original designs submitted for spent fuel pools, fuel assemblies were packed in relatively low densities, but operators have replaced these low-density racks with higher-density racks to store more spent fuel. According to NRC officials, NRC accepts high-density storage of spent fuel if certain conditions are met, such as adequate cooling, the maintenance of structural integrity, and the prevention of a critical chain reaction.
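To make the half-life arithmetic described above concrete, the following minimal sketch (illustrative Python, not drawn from NRC or GAO materials) computes the fraction of a radioisotope remaining after a given number of years; the 30.2-year half-life for cesium-137 comes from the discussion above, while the specific time points are chosen only for illustration.

```python
def fraction_remaining(elapsed_years, half_life_years):
    """Fraction of the original radioisotope left after elapsed_years,
    using N(t) / N0 = (1/2) ** (t / half_life)."""
    return 0.5 ** (elapsed_years / half_life_years)

# Cesium-137 half-life of 30.2 years, as cited in the text above.
CS137_HALF_LIFE = 30.2

for years in (30.2, 60.4, 302.0):
    share = fraction_remaining(years, CS137_HALF_LIFE)
    print(f"after {years:5.1f} years, about {share:.3%} of the cesium-137 remains")

# Ten half-lives (about 302 years for cesium-137) leaves roughly 1/1,000 of
# the original amount, consistent with the statement above that cesium-137
# takes over 300 years to decay to negligible amounts.
```

This exponential form simply restates the halving rule described in the text; it is not a model of decay heat, which depends on the particular mix of radioisotopes present in the spent fuel.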
Neutron-absorbing materials can be used to keep closely packed assemblies from starting a chain reaction. As pools began to fill in the 1980s, NRC conducted several safety studies on the impact of increasing the density of spent fuel in pools and determined that the risk of a potential release from overheating or igniting, or even of a critical chain reaction from the dense geometric configuration, was small, particularly if certain steps were taken to reduce the risk. Even with re-racking to a dense configuration, however, spent nuclear fuel pools are reaching their capacities and may contain several thousand assemblies each. As reactor operators have run out of space in their spent fuel pools, more operators have turned to dry cask storage systems. These systems consist of a steel canister protected by an outer cask made of steel or steel and concrete to provide shielding from the heat and radiation of spent fuel. In one typical process of transferring spent fuel to dry storage, reactor operators place a steel canister inside a larger steel transfer cask and lower both into a pool. Spent fuel is loaded into the canister, a lid is placed on the canister, and then both the canister and transfer cask are removed from the pool. The lid is welded onto the canister, and the water drained. Then the canister and transfer cask are aligned with a storage cask and the canister is maneuvered into the storage cask. The storage casks, in either vertical or horizontal designs, are usually situated on a large concrete pad surrounded by safety systems and a security infrastructure, such as radiation detection devices and intrusion detection systems. The transfer process has become routine at some power plants (see fig. 5). In addition to regulating the construction and operation of commercial nuclear power plants, NRC also regulates spent fuel in dry storage. NRC requires that spent fuel in dry storage be stored in approved systems that offer protection from significant amounts of radiation. NRC evaluates the design of passively air-cooled dry storage systems for resistance to certain natural disasters, such as floods, earthquakes, tornado missiles, and temperature extremes. NRC may require physical tests of the systems, or it may accept information derived from scaled physical tests and computer modeling. For example, dry storage systems must be able to withstand, among other things, being dropped from the height to which they would be lifted during operations; being tipped over by seismic activity, weather, or other forces or accidents; fires; and floods. NRC has also analyzed the performance of dry storage systems in different terrorist attack scenarios. Once a dry storage system is approved, NRC issues a certificate of compliance for a cask design. Currently, NRC may issue a cask certificate for a term not to exceed 40 years. Similarly, NRC may renew a cask certificate for a term not to exceed 40 years (see fig. 6). The length of time that spent fuel can safely be stored in dry casks is uncertain. We earlier reported that experts agree that spent fuel can be safely stored for up to about 100 years, assuming regular monitoring and maintenance. In December 2010, NRC issued a determination and associated rule stating that spent fuel can be safely stored for up to 60 years beyond the licensed life of the reactor in a combination of wet and dry storage.
Four states, an Indian community, and environmental groups petitioned for review of NRC's rule, however, arguing in part that NRC violated the National Environmental Policy Act by failing to prepare an environmental impact statement in connection with the determination. On June 8, 2012, the U.S. Court of Appeals for the District of Columbia Circuit held that the rulemaking did require either an environmental impact statement or a finding of no significant environmental impact and remanded the determination and rule back to NRC for further analysis. NRC has not yet indicated what actions it will take in response to the court's action. On August 7, 2012, the commissioners voted not to issue final licenses dependent on the determination and rule until the agency addresses the court's remand; the commission is, however, currently preparing an environmental impact statement on the effects of storing spent fuel for 200 years. In addition, NRC, DOE, and industry are conducting a series of studies to evaluate the regulatory actions or additional engineering measures needed for long-term storage of spent fuel to account for possible degradation of the canisters or the spent fuel in the canisters.

Since the 1950s, even before operation of the first commercially licensed nuclear power reactor in the United States, the federal government recognized the need to manage the back end of the fuel cycle—spent nuclear fuel removed from a reactor. A 1957 National Academy of Sciences report endorsed deep geological formations to isolate high-level radioactive waste, which includes spent nuclear fuel, but during the 1950s and 1960s, nuclear waste management received relatively little attention from policymakers. The early regulators and developers of nuclear power viewed waste disposal primarily as a technical problem that could be solved when necessary by applying existing technology. Attempts were made to reprocess the spent nuclear fuel—that is, to reuse some useful elements remaining in a spent fuel assembly after it is discharged from a reactor, such as unfissioned uranium-235—but this process was not pursued because of economic issues and concerns that reprocessed nuclear materials raise proliferation risks. As noted above, the Nuclear Waste Policy Act of 1982 charged DOE with investigating sites for a federal geologic repository and authorized DOE to contract with reactor operators to take custody of spent fuel for disposal at the repository beginning in 1998. In 1987, Congress amended the Nuclear Waste Policy Act to direct DOE to focus its efforts only on Yucca Mountain for a repository. DOE did not submit a license application for Yucca Mountain until 2008, however—10 years after it was supposed to start taking custody of spent fuel. In 2009, DOE announced that it planned to terminate its work related to the Yucca Mountain repository, and in 2010 it filed a motion to withdraw the license application. NRC's licensing board denied the motion, but DOE continued to take steps to dismantle the repository project. In September 2011, the NRC commissioners considered whether to overturn or uphold the licensing board's decision, but they were evenly divided and unable to take final action on the matter. Instead, the NRC commissioners directed the licensing board to suspend work by September 30, 2011. NRC's failure to consider the application, among other things, is being contested in federal court.
Several parties have filed a petition against NRC asking the federal court to, among other things, compel NRC to provide a proposed schedule with milestones and a date for approving or disapproving the license application. Currently, it remains uncertain whether NRC will have to resume its license review efforts and whether a repository at Yucca Mountain will be built. In the interim, in 2010, the administration directed DOE to establish a Blue Ribbon Commission of experts to study an array of nuclear waste management alternatives. DOE established the commission, which studied alternatives including options for interim storage of spent fuel and permanent disposal. In its January 2012 report, the commission recommended that the nation adopt centralized storage of some spent fuel as an interim measure but, at the same time, develop a process to find and license a site for a permanent repository. With nowhere to send the spent fuel, operators must keep it on-site at decommissioned and operating commercial reactors until some option to move it off-site becomes available.

Countries other than the United States also produce electricity from nuclear power reactors and have programs to manage their spent nuclear fuel. Some countries, such as France, store their spent fuel in pools until it can be reprocessed, and other countries, such as Canada, use both wet and dry storage systems. Following the accident at Fukushima, Japan temporarily shut down its nuclear reactors, but it has restarted one and may restart others. Several countries have programs to develop permanent disposal facilities. See appendix II for more information on other countries' programs.

The amount of spent fuel accumulating at commercial reactor sites is expected to increase by about 2,000 metric tons each year until it can begin to be shipped off-site and, even then, shipping it off-site will be a decades-long process. By then, currently operating reactors will begin to retire, dismantling their spent fuel pools and leaving the spent fuel stranded in dry storage canisters with limited options for repackaging them, should repackaging be required to replace degraded canisters or to meet transportation or disposal requirements. The amount of spent fuel is expected to more than double to about 140,000 metric tons by 2055, when the last of the currently operating reactors is expected to retire, according to the Nuclear Energy Institute, but it may take at least that long to ship the spent fuel off-site. This amount is based on the assumption that the nation's current reactors continue to produce spent nuclear fuel at the same rate—about 2,000 additional metric tons annually; that no new reactors are brought online; and that some decline in the generation of spent fuel takes place as reactors are retired. At the end of 2012, over 69,000 metric tons is expected to have accumulated at 75 sites in 33 states, enough to fill a football field about 17 meters deep.

Without central storage options or an available permanent disposal facility, spent fuel continues to accumulate at the sites where it was generated. Current industry practice has been to store the spent fuel in the pools, with an industry expectation that, at some point, DOE would begin to take custody of it. In 2011, about 74 percent of commercial spent fuel was stored in pools, and the remaining 26 percent was in dry storage, but these proportions will slowly change as more pools fill and the spent fuel is transferred to dry storage.
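The growth described above is essentially linear, so the order of magnitude of these projections can be checked with simple arithmetic. The sketch below is an illustrative back-of-the-envelope calculation only, using the approximate round numbers cited in this report (about 69,000 metric tons at the end of 2012 and roughly 2,000 metric tons added per year); it is not the Nuclear Energy Institute's model, which also reflects declining generation as reactors retire.

```python
# Illustrative back-of-the-envelope projection of commercial spent fuel
# accumulation, using the approximate figures cited in this report.
# Assumptions (simplified): ~69,000 metric tons at the end of 2012 and a
# constant generation rate of ~2,000 metric tons per year; the industry
# projection also reflects reactor retirements, so its 2055 estimate
# (~140,000 metric tons) is somewhat lower than a purely linear figure.

def projected_inventory(year, base_year=2012, base_tons=69_000, rate=2_000):
    """Linear projection of cumulative spent fuel (metric tons)."""
    return base_tons + rate * (year - base_year)

for year in (2020, 2030, 2040, 2055):
    print(year, projected_inventory(year))
# 2055 -> 155,000 metric tons under the constant-rate simplification,
# versus about 140,000 metric tons once declining generation from
# retiring reactors is taken into account.
```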
According to the Nuclear Energy Institute, by 2025, assuming no new reactors, the proportion of spent fuel in wet storage and dry storage should be roughly equal, about 50,000 metric tons in each. Shortly after 2055, when the last currently operating reactors' licenses are expected to expire and the reactors are expected to retire, virtually all the spent fuel arising from the current fleet will have been moved to dry storage. Figure 7 shows the trend of accumulated spent fuel and the rate of spent fuel transferred from wet storage to dry storage through 2067, according to our analysis of Nuclear Energy Institute data.

When it became evident that DOE was likely decades behind its deadline to pick up spent fuel, nuclear power plant operators began transferring spent fuel to dry storage to retain enough space in their pools to safely discharge fuel from their reactors. The rate of transfer differs by the operating and spent fuel characteristics of the reactor—that is, reactor type and size—as well as the size of the spent fuel pool. In general, reactor operators must transfer an average of three to six canisters each year to keep pace with the discharge of spent fuel from their reactors. Table 1 provides data on reactors and spent fuel and the anticipated rate of transfer to dry storage.

Reactor operators continue to fill their spent fuel pools until capacity is reached, in part because the transfer of spent fuel to dry storage is costly and time-consuming. Specifically, operators must take extensive steps to ensure that safety precautions to protect workers and the public are met. Before an operator can transfer a single fuel assembly to dry storage, the operator must train personnel and practice the procedure. According to industry representatives, these efforts involve several weeks of mobilization and demobilization of equipment before and after the transfer. The transfer of spent fuel to a single canister typically takes at least 1 week.

The amount of spent fuel that accumulates and is stored on-site will also be affected by the timing of an off-site central storage or permanent disposal facility, if and when one becomes available. To estimate the amount of accumulation at commercial nuclear power plants before an off-site facility becomes available, we considered three scenarios: (1) Yucca Mountain as a permanent disposal facility, (2) two federally funded centralized storage facilities, and (3) an alternative permanent disposal facility. For purposes of our analysis, we assumed that each storage facility would be licensed by NRC and funded by Congress. Furthermore, for each scenario, we recognized that multiple factors could affect the projected time frame. These factors include the siting, licensing, and construction of the storage or disposal facility and the start of its operations, as well as the time needed to ship spent fuel to the off-site facility and reduce the backlog of already-accumulated spent fuel. For each scenario, we made certain assumptions and incorporated them into our analyses. We estimated the earliest likely dates that Yucca Mountain, two federal centralized storage facilities, or a permanent repository could be opened. Our analysis was based on information from our prior work in analyzing alternatives to a repository at Yucca Mountain, including expert input to develop assumptions to model the time frames for different scenarios for spent fuel management. See appendix I for more details on our methodology for this analysis.
Our analysis showed that regardless of which storage or disposal scenario was considered, it would take at least 15 years to open an off-site location and decades to ship the spent fuel once the central storage or disposal facility became available. The time needed for shipment depends on the amount of fuel accumulated and assumes a shipment rate of 3,000 metric tons per year—the rate that DOE developed as part of its plans for Yucca Mountain. Experts we consulted in our prior work agreed this rate was reasonable. A faster or slower shipping rate could affect the rate of continued accumulation or drawdown of the backlog. When we conducted our analysis in 2009, we reported that Yucca Mountain—the first scenario—was likely to offer the earliest option for off-site disposal, in 2020. Since then, the process for licensing Yucca Mountain has stopped, and it is unclear whether the licensing process will be resumed; in addition, many key workers who worked on Yucca Mountain have left DOE for other employment or retirement. If the licensing process for Yucca Mountain were resumed in 2012, we estimate that DOE would require at least 15 more years to open the site as a repository, or sometime around 2027. We estimate that the second scenario—for the federal government to site, license, construct, and open two centralized storage facilities—might take about 20 years, with completion in 2032, because of the complexities in siting, licensing, and constructing such facilities. We estimate that the third scenario—for a potential permanent disposal facility as an alternative to the Yucca Mountain repository—would take the longest to be realized, about 40 years, or 2052, because of the additional scientific analysis required to ascertain the safety of a permanent disposal facility. Figure 8 shows the amount of spent fuel that is expected to accumulate in each state for the years 2012; 2027 (the earliest likely opening date if the Yucca Mountain repository were to be licensed and constructed); 2032 (the earliest a centralized storage facility could be expected to open); 2052 (the earliest a permanent disposal facility other than Yucca Mountain could be expected to open); and 2067, when all currently operating commercial nuclear power reactors are expected to have retired and transferred their spent fuel to dry storage.

Resolving the issue of what to do with commercial spent nuclear fuel will likely be a decades-long, costly, and complex endeavor. Planning ahead to allow reactor operators and local communities to make better-informed and forward-looking decisions is important in such a complex undertaking. For example, DOE had earlier created designs for a specific type of canister for disposal at the Yucca Mountain repository and had informed reactor operators that all spent fuel destined for Yucca Mountain needed to be packaged in this specific canister, called a transportation, aging, and disposal canister. Although the canister had not gone into commercial production, its design specifications had at least informed reactor operators. Now that both DOE and NRC have suspended their licensing efforts for the Yucca Mountain repository, a great deal of uncertainty exists about future spent fuel management. Given this uncertainty, it may be difficult for reactor operators to make decisions about issues such as the rate of transferring spent fuel to dry storage and the type of canister to be used for disposal.
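The interaction between the 3,000-metric-ton-per-year shipment rate noted above and the roughly 2,000 metric tons generated each year determines how long it would take to draw down the accumulated backlog once a facility opens. The sketch below is an illustrative calculation using those round numbers and the report's approximate 2012 inventory; it is not the detailed scenario analysis described in appendix I, and it treats generation as constant until 2055 for simplicity.

```python
# Illustrative drawdown calculation: years needed to remove the on-site
# backlog once an off-site facility opens. Assumed round numbers from this
# report: ~69,000 metric tons at the end of 2012, ~2,000 metric tons of new
# spent fuel per year (treated as constant until 2055), and shipments of
# 3,000 metric tons per year. This is a simplification, not the report's
# scenario model.

def years_to_clear(open_year, base_year=2012, base_tons=69_000,
                   gen_rate=2_000, ship_rate=3_000, gen_end=2055):
    backlog = base_tons + gen_rate * (open_year - base_year)
    years = 0
    while backlog > 0:
        generated = gen_rate if open_year + years < gen_end else 0
        backlog += generated - ship_rate
        years += 1
    return years

for scenario, open_year in (("Yucca Mountain", 2027),
                            ("centralized storage", 2032),
                            ("alternative repository", 2052)):
    print(f"{scenario}: opens {open_year}, "
          f"roughly {years_to_clear(open_year)} years to clear the backlog")
# Under these simplified assumptions, shipping the accumulated fuel takes
# several decades in every scenario, consistent with the report's point
# that removal would be a decades-long process.
```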
During the decades it will take to open a storage or disposal facility, many reactors will be retiring from service, “stranding” their accumulated spent fuel in a variety of different dry storage systems, with no easy way of repackaging them should repackaging be required to meet storage or disposal requirements. Most U.S. reactors were built during the 1960s and 1970s and, after a 40-year licensing period with a possible 20-year extension, will begin retiring in large numbers by about 2030 and emptying their pools by about 2040. NRC regulations require radioactive contamination to be reduced at a reactor to a level that allows NRC to terminate the reactor license and release the property for other use after a reactor shuts down permanently. This cleanup process—known as decommissioning—costs hundreds of millions of dollars per reactor, and NRC is responsible for ensuring that operators provide reasonable assurance that they will have adequate funds to decommission their reactors. Once a spent fuel pool is removed, reactor operators will have limited options for managing spent fuel. For example, if reactor operators need to repackage their spent fuel because a canister has degraded or because other transportation or disposal requirements must be met, they will have to build a new spent fuel pool or some other dry transfer facility, or they will need to ship their spent fuel to another site with a wet or dry transfer facility. As of January 2012, the United States had nine decommissioned commercial nuclear power plant sites. Seven of these plants have completely removed spent fuel from their pools—a total of 1,748 metric tons—as well as all infrastructure except that needed to safeguard the spent fuel. The other two sites, which have a total of 5,103 metric tons of spent fuel in both wet and dry storage, are in the process of emptying their pools and transferring all their spent fuel to dry storage. Assuming that no centralized storage or permanent disposal facility becomes available, our analysis indicates that by 2040, the amount of stranded spent fuel in closed commercial nuclear power plants will total an estimated 3,894 metric tons; by 2045, that amount could increase to 28,751 metric tons; and by 2050, the amount could be 62,237 metric tons. By 2067, nearly all of the 140,000 metric tons of spent fuel could be stranded in dry storage. Figure 9 shows the expected pattern of growth for total accumulated spent fuel compared with that of spent fuel from decommissioned reactors, or stranded spent fuel. According to several studies on spent fuel storage, the key risk of storing spent fuel at reactor sites is radiation exposure from spent fuel that has caught fire when it is stored in a pool, but it is difficult to quantify the probability of such an event. Nuclear reactor operators have put into place several efforts to mitigate the effects of such a fire, although disagreement exists on the mitigation needed. In contrast to pool storage, spent fuel in dry storage is less susceptible to severe radiological releases. Furthermore, NRC has no centralized database to help identify, locate, and access classified studies on spent fuel. Radiation exposure—from a minor dose resulting from a work-related accident to a severe, widespread release of radiation from a spent fuel fire—is the key concern about the hazard of storing spent nuclear fuel. 
According to studies we reviewed and NRC officials and representatives of other groups we spoke with, the worst-case scenario for spent fuel at reactor sites is the possibility of a self-sustaining fire in a spent fuel pool, which could engulf all assemblies in the pool, with significant consequences. According to the analysis in a February 2001 NRC study, assuming a high release of radiation, the release of spent fuel fission products resulting from a pool fire could result in nearly 200 early fatalities, thousands of subsequent cancer fatalities, and widespread land contamination. These early fatalities could be reduced or eliminated, according to the study, if the radiation release was less severe and if there were an early evacuation of the affected population. NRC officials told us that the assumptions used in that study were very conservative and that they believed that a lower release of radiation and an early evacuation are more representative of potential scenarios involving operating nuclear power reactors. A 2006 National Academy of Sciences study also found that a spent fuel fire could release large quantities of radioactive materials into the environment and cause widespread contamination. NRC officials, as well as studies by Sandia National Laboratories (commissioned by NRC) and the National Academy of Sciences (2006), informed us about the conditions that could lead to a fire. Such a fire could occur only if enough water in the spent fuel pool were lost, such as through drainage or boiling away, exposing roughly the top half of the fuel assemblies. Without sufficient water to keep spent fuel covered and cool, it is possible that some of the hotter assemblies—those most recently discharged from a reactor—could ignite. Furthermore, once started, a fire in a spent fuel pool would be very difficult to extinguish because, in such a case, the zirconium alloy making up the metal cladding surrounding the assemblies would react with oxygen and, when a certain temperature was reached, would begin a chemical reaction that releases energy and raises the temperature. Essentially, the fire becomes hotter and self-sustaining and, depending upon the density of spent fuel in the pool, could spread to other assemblies. On the basis of studies cited by NRC officials and a Sandia National Laboratories study, a fire in a fully drained pool can start at about 1,830 degrees Fahrenheit (about 1,000 degrees Celsius). A zirconium fire does not involve flames; rather, it burns like a welding torch. A zirconium fire can start only if a complex series of conditions occurs. NRC and other studies indicate that such a fire is not likely. Furthermore, the physical protection features and mitigation measures at nuclear power reactors make the probability of a fire in a spent fuel pool very low. First, there must be an initiating event, such as an earthquake more severe than the pool was designed to withstand, an accidental drop of a cask during dry cask loading operations, or a terrorist attack. Second, the initiating event must result in a critical loss of water, such as through a breach in the pool wall or floor that would allow water to drain out. Third, the reactor operator must be unable to respond adequately to a water loss, such as being unable to replenish lost pool water sufficiently to cool the assemblies. 
Whether a self-sustaining fire starts and spreads depends on additional variables, according to Sandia National Laboratories studies commissioned by NRC from 2003 through 2006 to assess the effects of some of these variables for pool fires. Two important variables are the following:

The age and the heat of the spent fuel. Spent fuel is hottest when first discharged from a reactor but cools relatively quickly. The risk of a zirconium fire is much greater with recently discharged fuel than with older fuel.

The size of a hole in the pool and subsequent rate of water drainage. A Sandia National Laboratories study analyzed the effects of differently sized holes for various fuel assembly configurations, fuel ages, ventilation assumptions, and replacement water scenarios, and this analysis showed that larger holes and drainage rates, all other factors being equal, resulted in higher temperatures of the fuel assemblies.

NRC officials told us that, from a regulatory perspective, the risks of an event causing a large release of radiation that endangers public safety from spent fuel in either wet or dry storage are low enough to be within acceptable limits of risk. NRC officials also said the agency considers risk to be the probability of an event occurring multiplied by the consequences of that event and has determined that a spent fuel fire is a low-probability, high-consequence event. In 2001, an NRC study estimated the frequency of having spent fuel pool assemblies uncovered and exposed to the air to be, on average, an event that occurs once every 420,000 years. NRC officials told us the agency did not update its quantitative likelihood estimates after the September 11, 2001, terrorist attacks. Since Fukushima Daiichi, NRC has been engaged in ongoing initiatives related to items such as addressing a loss of off-site electricity and seismic hazard reevaluation. It has been conducting a study on the consequences of accident scenarios affecting spent fuel pools and is undertaking a probabilistic risk assessment to quantify spent fuel risk for a selected reactor site of interest.

Independent studies we reviewed indicate the difficulty of quantifying the level of risk of stored spent fuel. Examples of these studies follow:

The Institute for Resource and Security Studies, a Massachusetts-based technical and policy research group, reported in 2009 that the methodology needed to estimate the probability of nuclear accidents is complex, requiring consideration of internal and external initiating events, analyses involving uncertainty, peer review, and estimates of radiological consequences.

The National Academy of Sciences stated in a 2006 study that the probability of a terrorist attack on spent fuel storage cannot be assessed quantitatively or comparatively and that it is not possible to predict the behavior and motivations of terrorists. This study noted, and a National Academy of Sciences official expressed concern, that in the NRC-sponsored studies available when the National Academy of Sciences was performing its work, NRC did not examine some low-probability scenarios that could result in severe consequences and that, although unlikely, should be protected against.

Efforts to mitigate safety and security risks could reduce the effects of key factors in the dynamics of a potential fire in a spent fuel pool, according to our analysis of Sandia National Laboratories studies on pool fire scenarios.
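NRC's framing of risk as the probability of an event multiplied by its consequences, together with the 2001 frequency estimate cited above, can be put in rough perspective with a simple conversion from an annual frequency to a probability over a reactor's operating life. The sketch below is purely illustrative and uses an assumed 60-year operating period; the frequency it uses applies to fuel assemblies becoming uncovered, which is only one of the conditions described above, so it is not the probability of a pool fire.

```python
import math

# Illustrative conversion of NRC's 2001 frequency estimate (spent fuel
# assemblies uncovered roughly once every 420,000 reactor-years, on
# average) into a probability over an assumed 60-year operating life.
# NOT the probability of a pool fire: uncovering is only one of several
# conditions that would have to occur, so the fire probability is lower.

annual_frequency = 1 / 420_000    # uncovering events per reactor-year
operating_years = 60              # assumed operating period, for illustration

# Treating the events as a Poisson process:
prob_at_least_one = 1 - math.exp(-annual_frequency * operating_years)
print(f"{prob_at_least_one:.2e}") # about 1.4e-04 over 60 years

# NRC's risk framing is risk = probability x consequences: a very low
# probability combined with severe consequences yields the
# "low-probability, high-consequence" characterization described above.
```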
Still, disagreement exists—largely between community action groups and NRC—as to the appropriate density of assemblies in a spent fuel pool. Storage configurations that disperse the hottest spent fuel assemblies are among the most important mitigation efforts that Sandia National Laboratories has identified. NRC and community action groups differ, however, on the extent to which these efforts should be employed. In 2011, Sandia National Laboratories reported on its study of the safety and security benefits presented by five different fuel configurations in a storage pool. According to this study, it is preferable to employ configurations that place the more recently discharged, hotter assemblies away from each other—the farther the better—and intersperse them with older, cooler assemblies or, preferably, with empty adjacent cells. NRC has provided regulatory guidance to reactor sites to take advantage of these safer configurations.

Representatives from community action groups we interviewed said that even with NRC's mitigation efforts, spent fuel pools remain too densely packed and that the total amount of spent fuel in the pools should be reduced by accelerating the transfer of spent fuel into dry storage. In addition, a 2003 study led by a scholar at a community action group proposed open rack storage for spent fuel pools. Under this proposal, 20 percent of the pool assemblies would be transferred to dry storage, which would then allow an open channel on each side of the pool. This configuration would help promote air convection between the assemblies and, in turn, reduce the probability of an ignition and subsequent spread to other assemblies. The fewer assemblies that catch fire, the smaller the amount of potential radiation that could be released into the atmosphere. Furthermore, in 2006, over 150 community action and environmental groups collaborated to develop a set of principles for safeguarding spent fuel. They advocated spent fuel storage policies, including an open-frame, low-density layout for spent fuel pools and transfer of this fuel to dry storage within 5 years after its removal from a reactor. According to NRC, a state regional organization, and representatives from industry and community action groups, there are trade-offs between the benefits and the costs and risks of moving spent fuel. Nonetheless, no clear agreement exists—according to Sandia National Laboratories' analysis and input from community action groups—on the extent to which the density of spent fuel in pools should be reduced.

NRC requires nuclear reactor sites to develop and implement strategies to maintain or restore cooling of reactor cores, containment, and cooling capabilities for spent fuel pools under circumstances caused by explosions or fire—a requirement that includes providing sufficient, portable, and on-site cooling equipment. A Sandia National Laboratories study determined that when holes in a pool structure cause significant water drainage, reactor operators would generally have from a few hours to a few days to replace lost water or cool spent fuel with sprays in an effort to prevent a fire. If no water drained, such as in a loss-of-power event that caused a loss of cooling and allowed the pool water to boil, reactor operators might have days or weeks. NRC officials said that as spent fuel is uncovered, sprays are efficient and effective in cooling fuel assemblies. They also told us that trade-offs exist between installed and portable spray systems.
Installed spray systems can be operated remotely but are susceptible to damage during an event. Portable systems provide adequate spray and are stored at least 100 yards away from the pool in secure places, but in case of an event, reactor operators may not always have access to the pool area to use them because of radiation hazard or physical obstruction. According to a member of a community action group we interviewed, replacement water and sprays may be effective in cooling spent fuel, but replacement water may not contain boron, which is needed to absorb neutrons and prevent a critical chain reaction. This member told us that there is no requirement for reactor operators to keep a supply of boron to add to replacement water. According to NRC officials, only operators of pressurized water reactors have the option of adding boron to the water to prevent a critical chain reaction, but operators of these reactors must also show that the assemblies will remain sub-critical without the boron. The NRC officials stated that all reactors are required to have a 5-percent margin of safety to prevent a critical chain reaction. In addition to boron in the water, prevention of a critical chain reaction can also be achieved by boron in plates in the racks, spacing among the assemblies, and other storage configurations. After the Fukushima Daiichi nuclear power reactor accident, NRC in March 2012 supplemented existing requirements by issuing an order instructing nuclear power operators to install monitoring equipment to remotely measure a wider range of water levels in spent fuel pools. NRC issued a second order, also in March 2012, that required reactor operators to ensure the effectiveness of water mitigation measures. It is more difficult to provide sprays and replacement water to boiling water reactor pools because they are typically several stories above ground and located close to the reactor, whereas spent fuel pools for pressurized water reactors are at ground level or partially embedded in the ground. At Fukushima Daiichi, cooling flow to the spent fuel pool was lost during the loss of off-site power and was not immediately restored with the use of emergency diesel generators. Emergency operators did not have remote monitoring equipment to determine whether pool water levels had dropped enough to expose the spent fuel. Subsequent inspections, however, determined that water levels did not drop below the top of the fuel assemblies in the pool. As we stated in our 2003 report, air ventilation can mitigate the likelihood of a pool fire in the event of water drainage. Logically, this mitigation potential depends upon where the ventilation occurs and how much ventilation can be created. A Sandia National Laboratories study found that space between assemblies and the pool wall can help promote ventilation, as can doors and vents in the room where the pool is located. Space under the assemblies can be created at the foot of racks supporting fuel assemblies, which allows circulating air to flow up between the assemblies and carry heat away with it in the event of complete drainage of water from the pool. However, according to a study led by a scholar at a community action group, with assemblies packed in dense configurations in racks at most nuclear reactor pools and boron plates lining the racks of assemblies, ventilation may be reduced. 
Spent nuclear fuel in dry storage is less susceptible to a radiological release of the magnitude of a zirconium fire in a spent fuel pool, according to documents we reviewed and interviews we conducted with officials from NRC, the National Academy of Sciences, and the Nuclear Waste Technical Review Board; officials from industry; and representatives of community action groups. Such a release is less likely for the following reasons:

Spent fuel cools rapidly, and spent fuel in dry storage—typically at least 5 years old—has cooled sufficiently so that ignition is less likely. In addition, passive air cooling in dry cask storage systems is not affected by the loss of off-site power, and active monitoring—other than ensuring that air vents are not clogged—is not necessary to prevent overheating and possible ignition.

The amount of radioactive material in a dry storage canister is a fraction of the amount in a spent fuel pool. According to the National Academy of Sciences' 2006 study, each dry storage canister contains 32 to 68 fuel assemblies—whereas thousands of assemblies are typically stored in pools—and therefore each canister has far less radioactive material that can be released than a pool does. Logically, breaching dozens of spent fuel canisters simultaneously could result in more severe consequences than a single breached canister, but breaching dozens of canisters simultaneously is difficult. To trigger any severe off-site radiological release from spent fuel stored in a canister, the fuel would have to undergo aerosolization, which would entail breaching the outer and inner shielding units. Furthermore, any holes would have to be sufficiently large to allow release of the aerosolized spent fuel. It would be difficult to aerosolize radioactive material in dry storage and difficult to have some mechanism to transport the radioactive material away from the reactor site. Such mechanisms would require energy, such as a fire.

Dry storage is not as susceptible to the buildup of hydrogen as are spent fuel pools. If an accident or attack involving a spent fuel pool causes a loss of water, the fuel assemblies can heat up and produce steam. This steam can react with the hot zirconium cladding surrounding the fuel assemblies, producing hydrogen that, when mixed with oxygen, could cause an explosion and structural damage to the reactor building.

As we reported in our 2003 study, NRC had concluded before September 11, 2001, that spent fuel in dry cask storage systems was considered safe and secure. A Sandia National Laboratories study conducted from 2003 through 2005, supplemented by NRC analyses, evaluated several representative types of dry cask storage designs against airplane and ground attacks to determine if any other security measures were needed, in addition to those already issued by order. This work did not find that any further mitigating or security procedures were needed for nearly all the scenarios, but it did identify some potential scenarios in which some radiation could be released. This study helped inform NRC's technical evaluation—first discussed internally at NRC in 2007, according to NRC officials, and published for solicitation of public comments in 2009. This evaluation included a proposal to establish a security-based dose limit that would require owners of spent fuel in dry storage systems to develop site security strategies to protect against a potential radiological release that exceeds NRC's acceptable dose limits at a site boundary.
NRC issued this evaluation for public comment for a proposed rule to revise security requirements for storing spent fuel away from a reactor. During the public comment period, NRC received general comments showing a preference for guarding against a specific threat rather than the dose-based approach proposed in the technical evaluation. For example, under the dose-based approach, some owners told NRC that they might have to increase their security forces to prevent potential radiological releases, and they raised concerns about the cost of such efforts compared with the benefit. As a result, according to NRC officials, the agency has delayed the proposed rule in order to gather more information regarding the public comments. NRC officials told us the agency plans to commission additional studies to help assess the situation and determine the appropriate security strategy. In conducting our work, we found that NRC does not have a mechanism to ensure that it can easily identify and locate all classified studies conducted over the years. When we requested classified and other studies from NRC officials, it was difficult for them to provide us with the information we requested in a timely manner. Specifically, nearly 5 months elapsed from our initial request for classified studies of wet storage until NRC provided these documents. A National Academy of Sciences official told us that the academy had also experienced difficulty in obtaining some of NRC’s classified studies while performing its 2004 study. To identify studies, we interviewed numerous NRC and other officials and identified studies through references in other studies we reviewed. NRC officials said the classified studies are stored in the safes of NRC officials. We also contacted officials from Sandia National Laboratories and requested a list of all their studies on spent fuel safety and security. NRC officials told us that developing and maintaining a classified database covering the most important topics involving spent fuel, as designated by agency management, would not be burdensome. Managing spent fuel until permanently disposed of may take many decades, and NRC and DOE managers and staff and operators with appropriate clearances may need to review an extensive number of classified studies conducted for NRC on the safety and security of spent fuel. Several studies conducted after September 11, 2001, by NRC and other groups referred to NRC studies conducted before that date—some conducted as early as 1979. We also found decades-old NRC studies to still be useful in our review. The nature and characteristics of spent fuel discharged from a reactor likely will not change, and therefore the underlying principles and knowledge of spent fuel safety and security are likely to remain applicable and informative to future scientists and others. Although preserving key scientific and technical studies is important, preservation of information alone is not enough if others may not be aware of a study’s existence or location. Scientists and others rely on mechanisms that allow them to easily identify, locate, and access pertinent information, as well as to prevent unnecessary duplication of research. Transferring spent fuel from wet to dry storage is generally safe and offers several key benefits, but any movement of spent fuel entails some level of risk. Accelerating the transfer of spent fuel from wet to dry storage to reduce the inventory of spent fuel in a pool could increase those risks. 
Additional operational and other challenges to accelerating the transfer of spent fuel to dry storage may limit the degree of acceleration that may ultimately be achieved. Once spent fuel is in dry storage, additional challenges may arise, such as costs for repackaging should it be needed.

The transfer of spent fuel from wet to dry storage and long-term storage at reactor sites, although not originally part of the plan for managing spent fuel, has offered some benefits, according to our analysis of documents and interviews with NRC officials, representatives from industry, and community action and environmental groups. For example, without a permanent means of disposing of spent nuclear fuel for at least several decades, the transfer of spent fuel from pools to dry storage has provided the nation with time to develop a more permanent solution. We previously reported—on the basis of input from experts—that dry storage is considered safe for at least 100 years and is easily retrievable. Moreover, because most spent fuel pools are nearly at capacity, reactor operators must transfer as much spent fuel to dry storage as is discharged from the reactor. According to our analysis of input from these officials and representatives, accelerating the transfer of spent fuel from wet to dry storage may offer the following additional benefits:

Reducing the potential consequences of pool fires. An accelerated transfer of spent fuel to dry storage may return the pools to a low-density, open-frame configuration that could reduce potential consequences should an unintended release of radiation occur from a pool fire. Accelerated transfer has been advocated by more than 150 community action and environmental groups.

Potentially increasing the volume of transportation-ready spent fuel. Accelerating the transfer of spent fuel to dry storage could increase the volume of readily transportable spent fuel for ease of removal to an off-site facility for storage, reprocessing, or disposal, with the caveat that reactor operators take steps to ensure that canisters and their contents meet transportation requirements.

In addition, we note that once a reactor is decommissioned, spent fuel is less expensive to safeguard in dry storage than in wet storage. Specifically, we previously reported that the cost of operating a spent fuel pool at a decommissioned reactor could range from about $8 million to nearly $13 million a year but that the cost of operating a dry storage facility might amount to about $3 million to nearly $7 million per year. Nine reactor sites nationwide are currently shut down and partly decommissioned and have already transferred all their spent fuel to dry storage or are in the process of doing so, with plans to remove their spent fuel pools. A tenth site never had an operating reactor but was built as an interim storage pool in anticipation of reprocessing. The operators of this site have not announced any plans to transfer spent fuel to dry storage.

Accelerating the transfer of spent fuel from wet to dry storage entails some operational challenges, and some industry representatives told us that they have questioned whether the cost of overcoming these challenges is worth the benefit, particularly considering the low probability of a catastrophic release of radiation.
Furthermore, in a 2003 response to a recommendation by the Institute of Policy Analysis to accelerate the transfer of spent fuel from wet to dry storage to reduce the likelihood and potential consequences of a pool fire, NRC reported that accelerating the transfer of spent fuel is not justified, particularly given the billions of dollars it will cost, with no appreciable increase in safety. In commenting on a draft of this report, NRC reiterated this position, stating that it does not require the accelerated transfer of spent fuel to dry storage, particularly considering the small increase in safety that could be achieved, because it considers both wet and dry storage to be safe under current regulations. The studies that NRC provided to us on the safety and security of spent fuel did not include any comprehensive analysis of the advantages and disadvantages of accelerating the transfer of spent fuel from wet to dry storage. However, NRC officials stated that the commission is currently evaluating accelerated transfer of spent fuel to dry storage as part of a larger review of lessons learned from the Fukushima event. The officials stated that the evaluation will allow NRC to determine whether regulatory action is needed to require accelerated transfer of spent fuel. NRC officials have stated that they believe they can complete their planned evaluation within about 5 years. Some of the challenges from accelerating the transfer of spent fuel include the following:

Increasing the need for skilled workers and potential radiation doses to those workers. Workers at reactors face radiation exposure during routine transfer of spent fuel from wet to dry storage, particularly during loading operations, but this risk could increase if transfer were accelerated, according to a 2010 analysis by EPRI. The institute estimated worker exposure rates, assuming transfer of spent fuel in generic reactors both at the rate of current practice and at an accelerated rate. At the rate of current practice, EPRI reported, workers would collectively receive a dose of 15,836 rem over a nearly 90-year period associated with transferring the expected inventory of about 140,000 metric tons from wet to dry storage, performing annual maintenance and inspection of the dry storage systems, and constructing additional dry storage systems if additional dry storage capacity is needed. Assuming an accelerated rate of transfer after 5 years of cooling, EPRI calculated that worker dose would increase by 507 rem, or 3 percent, as a result of the transfer, maintenance and inspection, and construction duties performed over the same 90-year period. Assuming worker exposure rates would remain roughly the same, the additional 507 rem under an accelerated transfer scenario would represent the equivalent of an estimated 1,500 workers. Furthermore, EPRI has reported that industry is moving to high-burn-up fuel for greater efficiency. But this high-burn-up fuel is hotter and more radioactive than conventional fuel and requires cooling for about 7 years before it can be safely transferred to dry storage. If transfer is accelerated, this high-burn-up fuel could potentially increase worker dose.

Increasing the risk of dropping heavy loads. A heavy load dropped during transfer operations could damage assemblies or the pool liner, potentially leading to water drainage. NRC, A Survey of Crane Operating Experience at U.S. Nuclear Power Plants from 1968 through 2002, NUREG-1774 (Washington, D.C.: July 2003).
A single fuel assembly from a boiling water reactor weighs about 700 pounds, and a single fuel assembly from a pressurized water reactor weighs about 1,500 pounds; dry storage casks, once fully loaded, can weigh from 100 to 180 tons or more. NRC has provided guidance to industry to take steps to minimize damage from such a drop, such as using overhead cranes with special added safety features so that a single failure will not result in dropping a damaging load or developing handling routes designed to avoid lifting heavy loads over vulnerable equipment. NRC, Single-Failure-Proof Cranes for Nuclear Power Plants, NUREG-0554 (Washington, D.C.: May 1979). NRC, Control of Heavy Loads at Nuclear Power Plants: Resolution of Generic Technical Activity A-36, NUREG-0612 (Washington, D.C.: July 1980).

Working within time constraints. Timing preferences and operational limitations could constrain how much spent fuel is transferred in a given year and may present an obstacle to accelerated transfer from wet to dry storage. Industry representatives told us that under current practice, reactor operators prefer to transfer spent fuel to dry storage during periods of time that do not interfere with refueling, receiving new fuel, required inspections, and maintenance or other activities vital to plant operations. These activities typically consume about 8 to 9 months of each year's calendar. A routine dry storage loading operation may take 2 months or more, according to industry representatives. For example, one industry representative told us that it can take about 2 weeks to mobilize workers and equipment before the operation and about 2 more weeks to demobilize after the operation. Additionally, according to industry representatives at one operating reactor site we visited, each canister takes about 1 week to load, dry, seal, and move to a storage pad, which limits the number of canisters that can be loaded in a given year. In addition, spatial limitations—such as space for drying or welding lids onto multiple canisters, limited heavy lifting capabilities, and lack of free space in spent fuel pools to accommodate more than one cask at a time—may make simultaneous loading of canisters difficult. Some industry representatives we spoke with told us that there are limits on how much acceleration can be achieved in a single year.

Increasing costs. The transfer of spent fuel from wet to dry storage is costly in several ways. We estimated in a November 2009 report that the transfer cost for about five canisters is about $5.1 million to $8.8 million. One industry representative told us that if the transfer of spent fuel to dry storage were accelerated, the associated high up-front costs could strain some nuclear power plants' budgets. These up-front costs, which would be incurred over a longer period without acceleration, include the construction of a storage pad with accompanying safety and security features, which, we reported, could cost about $19 million to $44 million. These costs are initially borne by ratepayers or plant owners but may be passed on to taxpayers as a result of industry lawsuits against DOE for failure to take custody of the spent fuel. Moreover, EPRI reported that as older, cooler spent fuel is loaded into canisters, reactor operators eventually will be left with younger, hotter spent fuel to transfer from wet to dry storage.
Spent fuel stored in canisters generally should not exceed about 752 degrees Fahrenheit (400 degrees Celsius), and, as we reported earlier, spent fuel being discharged from reactors today may have to cool at least 7 years before it can be placed in dry storage. Given the heat load requirements for storing spent fuel, EPRI noted that it may not be possible to fill some canisters to capacity. Specifically, a canister with a capacity for 60 boiling water reactor assemblies that would store 60 older, cooler assemblies may be able to contain only 38 younger, hotter assemblies.

Reactor operators had never intended to leave spent fuel on their sites for extended periods, but even if the United States began to develop an off-site centralized storage or disposal facility today, spent fuel—which has already been stored on-site for several decades—would be stored on-site for several decades more. As a result, the following challenges could affect decisions on managing spent fuel.

Repackaging stranded spent fuel. Once reactors are decommissioned, reactor operators have limited options for managing the stored spent fuel. Specifically, once they package the spent fuel in canisters and dry casks, they are unlikely to have any means of repackaging if the canisters degrade over the long term or if the operators have to meet different storage or disposal requirements. As we previously reported, experts told us that canisters are likely safe for at least 100 years, but by then the spent fuel may have to be repackaged because of degradation. By the time such repackaging might be needed, reactor operators may no longer have pools or the necessary infrastructure to undertake the repackaging, as was the case at the Haddam Neck site we visited. Specifically, the Haddam Neck site had already decommissioned the reactor, transferred all its spent fuel from wet to dry storage, and dismantled its spent fuel pool. If the spent fuel at the site needed to be repackaged, a special transfer facility would need to be built, or the spent fuel would need to be shipped to a site that had a transfer facility. In addition, to reduce costs, reactor operators are selecting a variety of dry storage systems that maximize storage capacity. These varied systems do not raise safety issues, but they may complicate a transfer to a centralized storage facility or a permanent disposal facility because different systems require different handling requirements, such as the type of grappling hook and the size of the transport cask required. These differences may present more complex engineering challenges and cost issues as time passes and the volume of spent fuel in various systems increases. In addition, over time, it is possible that handling equipment would not be maintained and personnel would not continue to be trained. Maximizing storage capacity may raise additional engineering challenges and cost issues, particularly since larger canisters may meet storage requirements but not transportation requirements. The Nuclear Energy Institute has reported that of all the spent fuel currently in dry storage, only about 30 percent is directly transportable. It also reported that the remaining spent fuel could need as much as 10 more years of cooling to meet NRC's transportation heat-load requirements to ensure that assemblies can withstand the force of a potential accident.
Reducing community opposition. As reactors begin to be closed down and decommissioned, reactor operators will leave spent fuel on sites that will serve no other purpose than storing that fuel. Continued on-site storage would likely face increasing community opposition, which could make it difficult for operators to obtain NRC recertification for storage sites at reactors, approval for licenses to extend the operating life of other reactors, or licenses for new reactors. According to officials from a state regional organization we spoke with, the longer the federal government defers a permanent disposition pathway for spent fuel, the less likely the public would be to accept interim solutions, for fear such solutions would become de facto permanent solutions. Also, in our prior work, experts noted that many commercial reactor sites are not suitable for long-term storage and that none have had an environmental review to assess the impacts of storing spent fuel beyond the period for which the sites are currently licensed. As discussed above, in June 2012, a federal appellate court remanded NRC's waste confidence determination and rule for the preparation of an environmental impact statement or finding of no significant environmental impact.

Managing costs. Continued storage of spent fuel may be costly. Because owners of spent fuel would have to safeguard it beyond the life of currently operating reactors, decommissioned reactor sites would not be available to local communities and states for alternative development. The Blue Ribbon Commission recommended that the nation open one or more centralized storage facilities and put a high priority on transferring the so-called stranded spent fuel to free decommissioned reactor sites for other uses. We previously reported the cost of developing two federal centralized storage facilities to be about $16 billion to $30 billion, although this estimate does not include final disposal costs, which could amount to tens of billions of dollars more. We also previously reported that if spent fuel needs to be repackaged because of degradation, repackaging could cost from $180 million to nearly $500 million, with costs depending on the number of canisters to be repackaged and whether a site has a transfer facility, such as a storage pool.

Furthermore, because of uncertainties over the condition of large amounts of high-burn-up fuel that might have to be repackaged for disposal after transportation, NRC stated that until further guidance is developed, the transportation of high-burn-up fuel will be handled on a case-by-case basis using the criteria given in current regulations. (A license is required for delivery of licensed material to a carrier for transport or for the transport of licensed material. 10 C.F.R. § 71.3 (2012).) Without a standardized cask design for storage, transportation, and disposal, it may be difficult to design the type of large-scale transportation program needed to transfer high-burn-up fuel away from reactor sites.

Maintaining security over the long term. Future security requirements for the extended storage of spent fuel are uncertain and could pose additional challenges.
Specifically, before the September 11, 2001, terrorist attacks, spent nuclear fuel was largely considered to be self-protecting for several decades because its very high radiation would prevent a person from handling the material without incurring health or life-threatening injury in a very short time, although incapacitating health impacts may sometimes not occur for up to 16 hours. (The International Atomic Energy Agency, DOE, and NRC have considered spent fuel to be self-protecting when its radiation level exceeds 100 rad—radiation absorbed dose, a unit of measurement—per hour at 1 meter unshielded. After short-term exposure to 250 to 500 rad, about 50 percent of the people coming in contact with the spent fuel would be expected to die within 60 days.) In addition, as spent fuel decays over time, it produces less decay heat. A spent fuel assembly can lose nearly 80 percent of its heat 5 years after it has been removed from a reactor and 95 percent of its heat after 100 years. Given the willingness of terrorists in recent years to sacrifice their lives as part of an attack, the national and international communities have begun to rethink just how long spent fuel really might be self-protecting. As spent fuel ages and becomes less self-protecting, additional security precautions may be required.

Continuing taxpayer liabilities. The continued on-site storage of spent fuel will not alleviate industry's lawsuits against DOE for failure to take custody of the spent fuel in 1998 as required by contracts authorized under the Nuclear Waste Policy Act of 1982, as amended. DOE estimates that the federal government's liabilities resulting from the lawsuits will be about $21 billion through 2020 and about $500 million each year after that. These costs are paid for by the taxpayer through the Department of the Treasury's Judgment Fund.

The decades-old problem of where to permanently store commercial spent nuclear fuel remains unsolved even as the quantities of spent fuel—in either wet or dry storage—continue to accumulate at reactor sites across the country. It is not yet clear where a repository will be sited, but it is clear that it may take decades more to site, license, construct, and ultimately open a disposal site. In the interim, some scientists, environmentalists, community groups, and others have expressed growing concerns about the spent nuclear fuel that is densely packed in spent fuel pools, especially after the water in the pools at the Fukushima Daiichi nuclear power plant complex in Japan was at risk of being depleted, increasing the risk of widespread radioactive contamination. The chances of a radiation release are extremely low in either wet or dry storage, but the event with the most serious consequences—a self-sustaining fire in a spent fuel pool—could result in widespread radioactive contamination. NRC has studied the likelihood of such an event and has taken a number of steps to prevent a fire, including mitigating measures, though some community action groups have raised questions about whether those steps are enough, given the severity of consequences. Moreover, because storage or disposal facilities may take decades to develop, NRC and DOE officials and others with appropriate clearances and a need to know may need to review classified studies conducted by and for NRC on the safety and security of spent fuel as they manage it over that period.
These studies are likely to be relevant for decades and, therefore, continue to contribute to institutional knowledge and the ultimate decisions made concerning the handling and storage of spent nuclear fuel. Nevertheless, NRC does not have a mechanism that allows for easy identification and location of classified studies conducted over the years. Without such a mechanism, it may be difficult and time-consuming to access the necessary studies. To help facilitate decisions on storing and disposing of spent nuclear fuel over the coming decades, we recommend that the Chairman of the Nuclear Regulatory Commission direct agency staff to develop a mechanism that allows individuals with appropriate clearances and the need to know to easily identify and access classified studies so as to help ensure that institutional knowledge is not lost. We provided NRC with a draft of this report for review and comment. In written comments, which are reproduced in appendix IV, NRC generally agreed with the findings and the recommendation in our report. NRC did note, however, that our characterization of NRC’s position to not require accelerated transfer of spent fuel to dry storage was factually incorrect. Specifically, NRC stated that we characterized its position on accelerated transfer as being solely a cost-benefit decision. NRC stated that it does not require accelerated transfer because it considers both wet and dry storage to provide a safe means of storing spent fuel that is in full conformance with agency regulations. We clarified the report language to more clearly state NRC’s position. Regarding the recommendation, NRC stated that it planned to review its internal procedures to determine if any measures need to be taken to ensure the classified information is readily available to future decision makers. NRC also provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of the Nuclear Regulatory Commission, the Secretary of Energy, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine the amount of spent fuel projected to accumulate before it can be moved from individual reactor sites, we obtained data from the Nuclear Energy Institute, an industry advocacy organization, on current inventories of commercial spent nuclear fuel in wet and dry storage and a database on year-to-year projections of on-site spent fuel accumulation in wet and dry storage. We developed the projections of this amount on the basis of several assumptions, including that all 104 reactors would renew their licenses for 20 years, with the early shutdown of Oyster Creek, in New Jersey, 10 years before its license expires; that no new reactors are brought online; that the nation’s current reactors continue to produce spent fuel at the same rate; and that all spent fuel remaining in wet storage would be moved to dry storage 12 years after a reactor’s final shutdown. 
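The projection logic described above reduces to simple arithmetic. The following sketch is illustrative only and is not the model used for this report: it assumes the rounded figures cited elsewhere in the report (about 70,000 metric tons in storage in 2012, accumulating at about 2,000 metric tons per year) and a single assumed cutoff year for reactor discharges, rather than the reactor-by-reactor license assumptions listed above.

```python
# Illustrative back-of-the-envelope projection of on-site spent fuel accumulation.
# This is NOT the report's projection model. The constants below are rounded
# figures taken from the report (about 70,000 metric tons stored in 2012 and
# about 2,000 metric tons added per year); the cutoff year for discharges is
# a simplifying assumption.

START_YEAR = 2012
START_INVENTORY_MT = 70_000    # approximate inventory in 2012 (metric tons)
ANNUAL_ADDITION_MT = 2_000     # approximate yearly discharge from operating reactors
LAST_DISCHARGE_YEAR = 2047     # assumed year when discharges taper to zero

def projected_inventory(year: int) -> int:
    """Return a rough projected on-site inventory (metric tons) for a given year."""
    producing_years = max(0, min(year, LAST_DISCHARGE_YEAR) - START_YEAR)
    return START_INVENTORY_MT + ANNUAL_ADDITION_MT * producing_years

if __name__ == "__main__":
    for y in (2012, 2027, 2032, 2052, 2067):
        print(y, f"{projected_inventory(y):,} metric tons")
```

Under these simplified assumptions the inventory roughly doubles to about 140,000 metric tons, consistent with the total cited in this report; the report's own projection additionally accounts for individual license renewals, the early Oyster Creek shutdown, and the 12-year lag before fuel is moved to dry storage.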
As part of our analysis, we obtained information from reports and from interviews with the Nuclear Regulatory Commission (NRC); the Department of Energy (DOE); the Electric Power Research Institute, a nonprofit research entity; and representatives from industry, academia, and community action and environmental groups. To assess the reliability of existing data, we reviewed available documentation and conducted interviews with individuals knowledgeable about the data. On the basis of this information, we found these data to be sufficiently reliable for the purposes of our report. To determine the most likely options for moving spent fuel off-site, we used prior work that had analyzed the Yucca Mountain program and its most likely alternatives to help us assess three scenarios: (1) Yucca Mountain, (2) two federally funded central storage facilities, and (3) a new permanent disposal facility. We used assumptions from our prior work, updating dates as needed, and we supplemented these assumptions by reviewing documents and interviewing officials from federal and state regional organizations and representatives from industry, independent groups, and community action and environmental groups. Specifically, for the Yucca Mountain option, we asked DOE how long it would take for a repository at Yucca Mountain to open if licensing were to resume in 2012, assuming the license and funding were both approved. DOE told us that the best way to develop a new estimate would be to take the estimates that existed before the program was shut down and add the time elapsed between when DOE stopped work on licensing and when it may resume licensing, which is 10 years. We previously reported, however, that DOE's original estimate for licensing was likely too optimistic. Furthermore, because all of DOE's former Yucca Mountain program staff have been assigned to other offices, left the agency, or retired, some delays are likely in reassembling a licensing team—as much as 2 years, according to one former DOE official familiar with the Yucca Mountain program. Given these challenges, we added 5 additional years to DOE's original 10-year estimate of completing Yucca Mountain. If licensing for the Yucca Mountain program were to resume in 2012, the earliest possible opening date would be roughly 2027. For the two federal centralized storage facilities, we updated dates we developed for a prior report, in which we projected that building the centralized storage facilities would take about 19 years. Since these are rough estimates, we rounded the time frame to 20 years, meaning that if the process were started in 2012, the earliest that two federal centralized storage sites could open would be 2032. For a new repository, we analyzed DOE's actual and projected time frames for licensing and opening the Yucca Mountain repository and DOE's report to Congress on the time frames necessary to open a second repository. We also analyzed the time frames necessary to open the nation's only high-level radioactive waste disposal facility, the Waste Isolation Pilot Plant in New Mexico. On the basis of our analysis, we determined that if a process were started in 2012 to open a new repository, it could open in about 40 years, or 2052. To determine key safety and security risks of spent fuel, as well as potential mitigation actions, we reviewed NRC-commissioned studies performed by Sandia National Laboratories and studies by NRC, the National Academy of Sciences, community action groups, and industry.
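The three scenario time frames described above amount to adding a fixed duration to a 2012 start. The short sketch below simply restates that arithmetic; the durations of 15, 20, and 40 years are the figures from this report, and nothing else is derived or assumed.

```python
# Restating the report's earliest-opening arithmetic for the three scenarios.
# Durations come from the report: DOE's roughly 10-year Yucca Mountain estimate
# plus the 5 years GAO added; about 20 years for two centralized storage
# facilities; about 40 years for a new permanent repository.

PROCESS_START = 2012

scenario_durations_years = {
    "Yucca Mountain repository (10-year DOE estimate plus 5 added years)": 15,
    "Two federal centralized storage facilities": 20,
    "New permanent repository (alternative to Yucca Mountain)": 40,
}

for scenario, years in scenario_durations_years.items():
    print(f"{scenario}: earliest opening about {PROCESS_START + years}")
```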
Our primary period of focus was post-September 11, 2001, which included studies from 2002 to 2009, but we also reviewed pre-September 11, 2001, studies dating back to 1979. We identified relevant studies for review by asking officials from NRC, DOE, and Sandia National Laboratories, as well as knowledgeable persons whom we interviewed, and by reviewing the citations in these studies to identify still other relevant studies. We reviewed studies of spent fuel pools and dry casks at the classified, NRC safeguards, official use only, and unclassified levels. In addition, we toured the Haddam Neck decommissioned reactor site and the Millstone reactor in Connecticut, the Hope Creek and Salem reactors in New Jersey, and the Susquehanna reactor in Pennsylvania, and we spoke with NRC officials and industry representatives about wet and dry spent fuel storage issues, including potential mitigation actions, at these sites. Our site visits included decommissioned and operating reactor sites, sites with both pressurized water reactors and boiling water reactors, sites having both wet and dry storage, and sites using both vertical and horizontal dry storage systems. We also reviewed NRC requirements addressing the safety and security of spent fuel, as well as directives from the nuclear power industry. To determine the benefits and challenges of transferring spent fuel from wet to dry storage, including transferring this fuel at an accelerated rate, we reviewed prior GAO reports and documents from NRC, DOE, the Nuclear Waste Technical Review Board, the National Academy of Sciences, the Blue Ribbon Commission on America’s Nuclear Future, academia, industry, and community action and environmental groups. We also interviewed officials from NRC, DOE, and state regional organizations, and representatives of industry, academia, the Blue Ribbon Commission on America’s Nuclear Future, and community action and environmental groups. We spoke with industry representatives and NRC inspectors at the decommissioned and operating reactor sites we visited. In our interviews, we asked for their views on the benefits and challenges of transferring spent fuel from wet to dry storage and the benefits and challenges of accelerating that transfer. To further determine the cost considerations for transferring spent fuel from wet to dry storage, we updated cost component estimates developed for our 2009 report to constant 2012 dollars. In that report, we obtained information from a small group of experts to develop initial assumptions, which we then provided to a larger set of nearly 150 experts for comment. We conducted this performance audit from June 2011 to August 2012, in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Like the United States, other countries produce electricity from nuclear power reactors and have programs to manage their spent nuclear fuel. Table 2 provides a brief description of the programs in selected countries. Our report identified three scenarios in which spent fuel could be moved to an off-site location. 
Briefly, the earliest likely opening date if the Yucca Mountain repository were to be licensed and constructed is about 2027, the earliest a centralized storage facility could be expected to open is about 2032, and the earliest a permanent disposal facility that was an alternative to Yucca Mountain could be expected to open is about 2052. Table 3 summarizes the amount of spent fuel that is expected to accumulate in each state for these dates, as well as 2012—the current spent fuel accumulation—and 2067, when all currently operating commercial nuclear power reactors are expected to have retired and transferred their spent fuel to dry storage. The table also shows the rank for each state in terms of the amount of its accumulated spent fuel in comparison with the other states. In addition to the individual named above, Janet E. Frisch (Assistant Director), Antoinette Capaccio, Virginia Chanley, Ellen W. Chu, Randall Cole, R. Scott Fletcher, Cristian Ion, Mehrzad Nadji, Kevin Remondini, Robert Sánchez, Carol Shulman, Kiki Theodoropoulos, and Franklyn Yao made key contributions to this report.
Spent nuclear fuel, the used fuel removed from nuclear reactors, is one of the most hazardous substances created by humans. Commercial spent fuel is stored at reactor sites; about 74 percent of it is stored in pools of water, and 26 percent has been transferred to dry storage casks. The United States has no permanent disposal site for the nearly 70,000 metric tons of spent fuel currently stored in 33 states. GAO was asked to examine (1) the amount of spent fuel expected to accumulate before it can be moved from commercial nuclear reactor sites, (2) the key risks posed by stored spent fuel and actions to help mitigate these risks, and (3) key benefits and challenges of moving spent nuclear fuel out of wet storage and ultimately away from commercial nuclear reactors. GAO reviewed NRC documents and studies on spent fuel’s safety and security risks and industry data, interviewed federal and state government officials and representatives from industry and other groups, and visited reactor sites. The amount of spent fuel stored on-site at commercial nuclear reactors will continue to accumulate—increasing by about 2,000 metric tons per year and likely more than doubling to about 140,000 metric tons—before it can be moved off-site, because storage or disposal facilities may take decades to develop. In examining centralized storage or permanent disposal options, GAO found that new facilities may take from 15 to 40 years before they are ready to begin accepting spent fuel. Once an off-site facility is available, it will take several more decades to ship spent fuel to that facility. This situation will be challenging because by about 2040 most currently operating reactors will have ceased operations, and options for managing spent fuel, if needed to meet transportation, storage, or disposal requirements, may be limited. Studies show that the key risk posed by spent nuclear fuel involves a release of radiation that could harm human health or the environment. The highest consequence event posing such a risk would be a self-sustaining fire in a drained or partially drained spent fuel pool, resulting in a severe widespread release of radiation. The Nuclear Regulatory Commission (NRC), which regulates the nation’s spent nuclear fuel, considers the probability of such an event to be low. According to studies GAO reviewed, the probability of such a fire is difficult to quantify because of the variables affecting whether a fire starts and spreads. Studies show that this low-probability scenario could have high consequences, however, depending on the severity of the radiation release. These consequences include widespread contamination, a significant increase in the probability of fatal cancer in the affected population, and the possibility of early fatalities. According to studies and NRC officials, mitigating procedures, such as replacement water to respond to a loss of pool water from an accident or attack, could help prevent a fire. Because a decision on a permanent means of disposing of spent fuel may not be made for years, NRC officials and others may need to make interim decisions, which could be informed by past studies on stored spent fuel. In response to GAO requests, however, NRC could not easily identify, locate, or access studies it had conducted or commissioned because it does not have an agencywide mechanism to ensure that it can identify and locate such classified studies. As a result, GAO had to take a number of steps to identify pertinent studies, including interviewing numerous officials. 
Transferring spent fuel from wet to dry storage offers several key benefits, including safely storing spent fuel for decades after nuclear reactors retire—until a permanent solution can be found—and reducing the potential consequences of a pool fire. Regarding challenges, transferring spent fuel from wet to dry storage is generally safe, but there are risks to moving it, and accelerating the transfer of spent fuel could increase those risks. In addition, operating activities, such as refueling, inspections, and maintenance, may limit the time frames available for transferring spent fuel from wet to dry storage. Once spent fuel is in dry storage, there are additional challenges, such as costs for repackaging should it be needed. Some industry representatives told GAO that they question whether the cost of overcoming the challenges of accelerating the transfer from wet to dry storage is worth the benefit, particularly considering the low probability of a catastrophic release of radiation. NRC stated that spent fuel is safe in both wet and dry storage and that accelerating transfer is not necessary given the small increase in safety that could be achieved. To help facilitate decisions on storing and disposing of spent nuclear fuel over the coming decades, GAO recommends that NRC develop a mechanism for locating all classified studies. NRC generally agreed with the findings and the recommendation in the report.
Enemy sea mines were responsible for 14 of the 18 Navy ships destroyed or damaged since 1950, and producing countries have developed and proliferated mines that are even more difficult to detect and neutralize. After the Gulf War, during which two Navy ships were severely damaged by sea mines, the Navy began several actions to improve its mine warfare capabilities. The Navy’s current MCM capabilities are in a special purpose force that consists of 12 mine-hunter, coastal (MHC) and 14 MCM ships, 1 command and support ship, 24 mine-hunting and clearing helicopters, 17 explosive ordnance disposal detachments, a very shallow water detachment, and a marine mammal detachment. According to the Navy, the cost of operating and maintaining this MCM force from fiscal year 1992 through 2003 will be about $1.9 billion. Because the Navy’s MCM ships lack the speed and endurance they would need to accompany carrier battle groups and amphibious ready groups on overseas deployments, the Navy has changed its strategy of maintaining only a special purpose force to also developing mine countermeasure capabilities to be placed on board combat ships within the fleet. The Navy has consolidated operational control of all surface and airborne mine warfare forces under the Commander, Mine Warfare Command, and improved the readiness of these forces through exercises and training. The Navy also initiated research and development projects to address the weaknesses in its MCM program, especially the lack of on-board MCM capability throughout the fleet, and created a Program Executive Office for mine warfare, which brought together disparate MCM programs and their associated program management offices. In a prior report, we discussed weaknesses in the Navy’s ability to conduct effective sea mine countermeasures. We reported that critical MCM capabilities were unmet and reviewed the Navy’s efforts to address these limitations. At that time, the Navy had not established clear priorities among its mine warfare research and development programs to sustain the development and procurement of the most needed systems. Consequently, the Navy experienced delays in delivering new systems to provide necessary capabilities. DOD concurred with our recommendation that a long-range plan be developed to identify gaps and limitations in the Navy’s MCM capabilities and establish priorities. DOD said the process was ongoing and consisted of developing an overall concept of MCM operations and an architecture within which needs and shortfalls in capabilities could be evaluated and prioritized. DOD also said that critical programs would be identified and funded within the constraints of its overall budget. Congress previously expressed its concern that the Navy had failed to sufficiently emphasize mine countermeasures in its research and development program and noted the relatively limited funding allocation. As a result, mine warfare programs were designated as special congressional interest items. To support continuing emphasis on developing the desired mine countermeasures, Congress added a certification requirement in the National Defense Authorization Act for fiscal years 1992 and 1993. This required the Secretary of Defense to certify that the Secretary of the Navy, in consultation with the Chief of Naval Operations and the Commandant of the Marine Corps, had submitted an updated MCM master plan and budgeted sufficient resources for executing the updated plan. 
It also required the Chairman of the Joint Chiefs of Staff to determine that the budgetary resources needed for MCM activities and the updated master plan are sufficient. This certification requirement will expire with the fiscal year 1999 budget submission unless it is renewed. Although it has developed a strategy for overcoming deficiencies in its MCM capabilities, the Navy has not decided on the composition and size of its future on-board and special purpose MCM force. Navy officials have acknowledged the need to maintain some special purpose MCM force, while the Navy is moving toward an on-board MCM capability. The Navy currently has no on-board MCM capabilities and relies on a force of MCM assets that are specifically dedicated to that mission. The Navy has two assessments in progress to develop the information it needs to decide on the mix of its future on-board and special purpose forces. The objectives of these assessments are to determine (1) the quantities and types of on-board MCM systems the Navy will need to procure to meet fleet requirements in fiscal years 2005-2010; (2) the optimal force mix to meet fleet requirements in the 21st century; and (3) the numbers and types, if any, of special purpose MCM assets that will still be needed in the fiscal year 2010-2015 time frame. Initial results are expected to be available in October 1998, in time to influence the development of the fiscal year 2001 Navy resource program, with a final report in January 1999. Navy officials do not expect this phase of the assessments to provide all of the information that is needed to tailor the future MCM force structure. They do expect, however, that it will give them a good idea of how to plan procurement, training, and maintenance for the on-board systems expected to be deployed in the fiscal year 2001-2005 time frame. To address the lack of on-board capability, the Navy accelerated the delivery of a Remote Minehunting System and established a contingency shallow-water mine-hunting capability in one Navy Reserve helicopter squadron using laser mine detection systems, and it is including mine-hunting systems in upgrades to existing submarines and in new construction submarines. Maintaining the special purpose force is costly, and Navy resource managers have been evaluating how to pay for the operations and support costs of this force while pursuing costly development of on-board capabilities. A final force structure decision will likely be driven by the level of resources the Navy intends to dedicate to the MCM mission in the future—a decision that depends on numerous issues outside the MCM arena, such as conflicting funding priorities among the various Navy warfare communities (aircraft, surface ships, and submarines). A decision on the future force structure is, however, still needed because that decision will determine the types and quantities of systems to be procured, set priorities among systems, and determine the level of resources required for development, procurement, and sustainment. For example, the Navy is currently debating whether to retire the current mine-hunting helicopters, the MH-53, in favor of maintaining only H-60 series helicopters. This helicopter decision will directly affect the types and quantity of airborne MCM capabilities the Navy will be able to field in the future. Since 1992, the Navy has invested about $1.2 billion in RDT&E funds to improve its mine warfare capabilities. The Navy plans to spend an additional $1.5 billion for RDT&E over the next 6 years.
It is currently managing 28 separate MCM development programs and several advanced technology and advanced concept technology demonstrations. (See app. I for the status of selected programs.) So far, according to a Navy official, this investment has not produced any systems that are ready to transition to production. A few systems, such as the Airborne Mine Neutralization System, the Shallow-Water Assault Breaching system, Distributed Explosive Technology, and a Closed Loop Degaussing system, are scheduled for a production decision over the next 2 to 3 years. Other systems, such as communications data links for the MH-53 helicopters and the airborne laser mine-detection system (Magic Lantern Deployment Contingency), were not produced because the Navy never funded their procurement. Delays experienced in a number of MCM development programs result from the same kinds of problems that are found in other DOD acquisitions, such as funding instability, changing requirements, cost growth, and unanticipated technical problems. For example, although the MCM funding program is small, the Navy has reduced funding for its MCM research and development programs after budget approval. (See app. II for two program examples.) These problems in MCM acquisition programs show that the design, development, and production of needed systems are complex and that technical processes must operate within equally complex budget and political processes. If programs are not well conceived, planned, managed, funded, and supported, problems such as cost growth, schedule delays, and performance shortfalls can easily occur. Two examples of mine warfare programs that have been in the research and development phase for many years without advancing to procurement are the AQS-20, an airborne mine-hunting sonar, and the Airborne Mine Neutralization System. The AQS-20 began in 1978 as an exploratory development model and was scheduled for a limited rate initial production decision in fiscal year 1999. The Navy terminated the program in 1997 in favor of a follow-on sonar, the AQS-X, with added mine identification capability and a tow requirement from an H-60 helicopter instead of an MH-53 helicopter. During the intervening 19 years, the program was plagued by cost growth, changing requirements, and a funding shortfall. The development of the Airborne Mine Neutralization System began in 1975, but a production decision is not scheduled until fiscal year 2000. The principal reason for the delay is that the program was canceled and restarted two times because of funding instability. Contributing to difficulties in transitioning programs into production are a number of management and internal control weaknesses noted during the annual Federal Managers' Financial Integrity Act certification. Since 1992, the Program Executive Office has attempted to improve internal controls within five subordinate program offices by developing financial and acquisition management information and reporting systems. At its request, the Naval Audit Service is reviewing the state of internal controls within one of the program offices and expects to issue a report in the fall of 1998. A majority of officials we interviewed said that the annual certification requirement was useful because it served to increase the visibility of MCM requirements within DOD and the Navy. Most said that some form of the certification should continue to be required.
However, as currently prepared, the annual certification does not address the adequacy of overall resources for this mission, nor does it provide for objective measures against which progress can be evaluated. Moreover, the Chairman, Joint Chiefs of Staff’s involvement in the certification process occurs too late to have a significant impact. The annual certification does not address the adequacy of overall resources for this mission because the Navy’s budget for MCM programs addresses only the adequacy of funding for the budget year, not the out years. Further, nothing in the certification process provides objective measures against which progress can be evaluated. Such measures have been developed within the MCM community. For example, the time required by a tactical commander to clear a certain area of mines with and without various capabilities could be used in making individual program decisions. Likewise, there are mean times between repairs and average supply delay times to gauge reliability and supportability for the MCM and MHC ships. In the past, the DOD staff has not been willing to challenge Navy decisions regarding the content and adequacy of its MCM program. Instead, it focused on analyzing the consistency of the program from year to year. Consequently, DOD has been able to certify annually that the budget contains adequate resources for the program. However, in November 1997, the Secretary of Defense expressed his concern about the Navy’s financial commitment to mine warfare programs. As a result, the Navy added about $110 million to MCM programs over the future years defense planning period. The inclusion of the Chairman, Joint Chiefs of Staff, in the certification process was intended to give the regional commanders in chief an opportunity to influence the development of the MCM budget. We believe, however, and DOD and Navy officials agree, that the Chairman, Joint Chiefs of Staff’s determination has not added any significant value. Although the Joint Staff has assessed joint MCM requirements and capabilities, its conclusions have not been used as a basis for challenging the Navy’s MCM programs or suggesting alternatives. Moreover, since the Joint Staff’s review has occurred after, rather than before, the Navy’s budget proposals for MCM programs have been formalized, it has had no impact on specific Navy acquisition programs or overall resource decisions. To have an effective program, the Navy needs to decide on the size, composition, and capabilities of its future MCM forces. This decision will assist in prioritizing and disciplining its research, development, and procurement efforts. As with other mission areas, the types and quantities of systems to be procured and their platform integration will most likely be driven by the level of resources the Navy allocates to the MCM mission in the future. What is required is for the Navy leadership and the various warfare communities to agree on the composition and structure (size) of future MCM forces and commit the necessary resources to their development and sustainment. Without such an agreement, budgetary pressures may result in degradation of the special purpose forces before the Navy has demonstrated and fielded effective, on-board capabilities within the fleet. The certification requirement has forced DOD and the Navy to pay increased attention to the MCM mission, and most officials involved support its continuation in some form. 
However, the certification has not provided any assurance that the resources for the MCM mission are “sufficient” because it has only addressed the adequacy of funding for the particular budget year and because the DOD staff and the Chairman of the Joint Chiefs of Staff have not challenged Navy resource allocation or budget decisions. If the Chairman of the Joint Chiefs of Staff’s involvement in the certification process is still considered important, it must occur in time to influence Navy decisions on requirements and funding. Overall budgetary pressures, the high operations and maintenance costs associated with the special purpose MCM fleet, and the Navy’s expectation of potential increased capabilities from on-board systems still early in development may combine to result in budgetary shifts from current special purpose forces before potential on-board capabilities are realized. We recommend that the Secretary of Defense, in conjunction with the Chairman, Joint Chiefs of Staff, and the Secretary of the Navy, determine the mix of on-board and special purpose forces DOD plans to maintain in the future and commit the funding deemed necessary for the development and sustainment of these desired capabilities. We also recommend that the Secretary of Defense direct the Secretary of the Navy to sustain the special purpose MCM forces until the Navy has demonstrated and fielded effective, on-board capabilities. The certification process has increased DOD’s and the Navy’s attention to the MCM mission. Since the certification requirement is scheduled to expire this year, Congress may wish to consider extending the annual certification requirement until the Navy has determined the mix of on-board and special purpose forces it will maintain in the future and has fielded effective, on-board MCM capabilities. To strengthen the certification process, Congress may wish to consider amending the requirement to ensure that the participation by the Chairman, Joint Chiefs of Staff, occurs before the Navy’s fiscal year budget is submitted to the Office of the Secretary of Defense. In commenting on a draft of this report (see app. III), DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy to sustain the special purpose MCM forces until the Navy has demonstrated and fielded effective on-board capabilities. DOD partially concurred with our first recommendation that the Secretary of Defense determine the mix of on-board and special purpose forces DOD plans to maintain in the future and commit the necessary funding. DOD has directed the Navy to ensure that both current and future mine warfare programs are adequately funded. In an April 7, 1998, letter to the Secretary of the Navy, the Secretary of Defense expressed his concern about the Navy’s lack of commitment of the necessary resources to mine warfare and noted that currently, requirements exceed resources allocated. He directed the Navy to (1) protect the mine warfare program from any further funding reductions until some on-board capabilities are available, (2) avoid using the funds currently planned for the special purpose forces to fund the development of on-board capabilities, and (3) develop a future years funding plan that matches requirements with resources. DOD, however, cited the Navy as having primary responsibility for MCM forces, whereas our recommendation was directed to the Secretary of Defense. 
We agree that the Navy does have primary responsibility, but the Secretary of Defense has had a special role through the certification process. As we conclude in the report, the certification requirement has had a positive impact. Therefore, we have added a matter for congressional consideration to the report that suggests that the certification requirement be extended. DOD partially concurred with our recommendation that the Secretary of Defense direct that involvement by the Chairman, Joint Chiefs of Staff, occur early enough to affect annual Navy budget submissions. DOD said the Chairman is involved early enough to affect budget decisions. Our recommendation, however, is based on our conclusion that the certification process has not been effective in assuring the adequacy of resources. This conclusion is based, in part, on the late involvement of the Chairman, Joint Chiefs of Staff. For example, we note that the Navy’s fiscal year 1999 budget submission went to Congress in late January 1998, yet the Secretary of Defense’s certification, which includes the Chairman’s determination regarding the sufficiency of the Navy’s resources in fiscal year 1999, was submitted in May 1998. Although the Chairman, Joint Chiefs of Staff, has input in the budget process, the certification requirement provides an additional opportunity to have an effect in assuring the sufficiency of resources. Since DOD only partially concurred and to strengthen the certification process, we have deleted our recommendation regarding the Chairman’s participation and added a matter for congressional consideration that the annual certification requirement be amended to ensure the participation by the Chairman, Joint Chiefs of Staff, before the Navy’s budget is submitted to the Office of the Secretary of Defense. The intent of our matters for consideration is to give additional attention to the sufficiency of budget resources the Navy has devoted to MCM. DOD also provided some updated information in its comments and we have incorporated it into our report as appropriate. To obtain information on the status of Navy plans, programs, and the certification process, we interviewed and obtained documentation from officials of the Office of the Secretary of Defense, the Joint Staff, the Defense Intelligence Agency, the Secretary of the Navy, the Chief of Naval Operations, the Naval Air and Sea Systems Commands, the Office of Naval Intelligence, and the Office of Naval Research in the Washington, D.C., area, and the Navy Operational Test and Evaluation Force and the Surface Warfare Development Group in Norfolk, Virginia. We also interviewed and obtained information from officials engaged in MCM scientific and technical research and development activities at the Naval Undersea Warfare Center in Newport, Rhode Island; the Navy Coastal Systems Station in Panama City, Florida; and the Applied Physics Laboratory of Johns Hopkins University, in Laurel, Maryland. To gain an understanding of existing capabilities and requirements, and an operational perspective, we interviewed and obtained information from the staff and operational units of the Commander in Chief, Atlantic Command and the Commander in Chief, Atlantic Fleet in Norfolk, Virginia; and the Commander, Mine Warfare Command, in Corpus Christi, and Ingleside, Texas. We conducted our review between September 1997 and March 1998 in accordance with generally accepted government auditing standards. 
We are sending copies of this report to the Chairman, Senate Committee on Armed Services; the Chairman, Subcommittee on Defense, Senate Committee on Appropriations; the Chairman, Subcommittee on National Security, House Committee on Appropriations; the Secretaries of Defense, the Army, and the Navy; and the Commandant of the Marine Corps. Copies will also be provided to other interested parties upon request. Please contact me at (202) 512-4841 if you have any questions about this report. The major contributors to this report are listed in appendix IV.

Program description: The Remote Minehunting System program develops a new remotely operated mine-hunting system that is capable of detecting and classifying mines. It is intended to provide the surface fleet with an on-board means of finding and avoiding mined waters. The program has a three-fold strategy to develop a new vehicle, upgrade it with state-of-the-art mine-hunting sensors, and provide a supportable, incremental operational contingency system to the fleet during the development process.
Platform: Surface combatants.
Mine threat: Bottom & moored mines/deep to very shallow water.
Program start date: Fiscal year 1993.
Date of estimated completion of research & development phase: Fiscal year 2002, milestone III on version 4 (proposed).
Current status: Milestone III on version 3 had been scheduled for fiscal year 1999; however, due to cost and schedule problems, the program has been restructured to drop version 3 and continue development of version 4.
Funding (fiscal years 1992-97): $44.1 million.
Programmed funding (fiscal years 1998-03): $103.7 million.

Program description: The Magic Lantern is a helicopter-mounted laser/camera system that detects and classifies moored mines. The objective of the Magic Lantern Deployment Contingency system is to field an advanced development model on one detachment of Naval Reserve SH-2G helicopters to provide on-board mine reconnaissance capability for surface and near surface water. In fiscal year 1996, Congress directed a competitive evaluation field test of the Airborne Laser Mine Detection System technologies. These technologies included Magic Lantern, ATD-111, and the Advanced Airborne Hyperspectral Imaging System. This field test took place in late 1997. The Navy expects to send the final report to Congress by the end of April 1998.
Platform: SH-2G helicopters.
Mine threat: Floating and shallow-water moored mines.
Program start date: Fiscal year 1992 (start of the Airborne Laser Mine Detection System program).
Date of estimated completion of research & development phase: Fiscal year 1999.
Current status: Installation of contingency systems on H-60 helicopters.
Funding (fiscal years 1992-97): $73 million.
Programmed funding (fiscal years 1998-03): $29.3 million.

Program description: This system is intended to provide an unmanned undersea vehicle mine reconnaissance capability in the form of a single operational prototype, as a stop-gap, interim clandestine offboard system. The system is to be launched and recovered from an SSN-688 class submarine.
Platform: SSN-688 class submarines.
Mine threat: Bottom and moored mines in deep through very shallow water.
Program start date: Fiscal year 1994.
Date of estimated completion of research & development phase: Fiscal year 2003.
Current status: Initial operational capability is scheduled for fiscal year 1998. The system is scheduled to participate in the Joint Countermine Advanced Concept Technology Demonstration II in June 1998.
Funding (fiscal years 1994-97): $42.3 million.
Programmed funding (fiscal years 1998-03): $29.6 million.

Program description: Radiant Clear is a joint Navy-Marine Corps effort to graphically depict the littoral environment and coastal defenses through the application of advances in the processing of data collected by national systems.
Platform: Not applicable.
Mine threat: Very shallow water to the beach.
Program start date: Fiscal year 1996.
Date of estimated completion of research & development phase: Open.
Current status: Demonstration, May 1998.
Funding (fiscal years 1996-97): $2 million.
Programmed funding (fiscal years 1998-03): $6 million.

Program description: This system is an explosive line charge system that is delivered from a rocket motor and deployed from a manned Landing Craft, Air Cushion at a standoff range of 200 feet.
Platform: Manned Landing Craft, Air Cushion.
Mine threat: Very shallow water and surf zone, optimized for 3-10 feet water depth.
Program start date: Fiscal year 1992.
Date of estimated completion of research & development phase: Fiscal year 1999, milestone III.
Current status: Fiscal year 1998, developmental and operational testing.
Funding (fiscal years 1992-97): $35.3 million.
Programmed funding (fiscal years 1998-03): $10.9 million.

Program description: The Distributed Explosive Technology program is a distributed explosive net that is delivered by two rocket motors and deployed from a manned Landing Craft, Air Cushion at a standoff range of 200 feet. It is designed to provide a wide swath of mine clearance in the surf zone.
Platform: Manned Landing Craft, Air Cushion.
Mine threat: Surf zone, optimized for depths less than 3 feet to the beach.
Program start date: Fiscal year 1992.
Date of estimated completion of research & development phase: Fiscal year 1999, milestone III.
Current status: Fiscal year 1998, developmental and operational testing.
Funding (fiscal years 1992-97): $47 million.
Programmed funding (fiscal years 1998-03): $19.5 million.

Program description: The AQS-20 was to be an airborne towed high speed mine-hunting sonar. It was to work in conjunction with the Airborne Mine Neutralization System. The AQS-20 was to provide the capability to search, detect, localize, and classify mines.
Platform: MH-53 helicopters.
Mine threat: Bottom, close tethered, and volume mines in deep and shallow water.
Program start date: 1978.
Date of estimated completion of research & development phase: Fiscal year 2001.
Current status: Transitioning to AQS-X, a follow-on advanced sonar with the addition of mine identification capability and towed capability from the H-60 helicopter. An advanced sonar fly-off is planned for fiscal year 1999.
Funding (fiscal years 1992-97): $73.1 million.
Programmed funding (fiscal years 1998-03): $76.3 million.

Program description: This system is a magnetic and acoustic system and is to rapidly sweep and clear influence mines by emulating the signatures of amphibious assault craft. It is to be an on-board mine countermeasures asset and capable of night operations.
Platform: Remotely controlled surface craft, but other platforms are being explored.
Mine threat: Influence mines in shallow and very shallow water.
Program start date: Fiscal year 1993.
Date of estimated completion of research & development phase: Fiscal year 2000, scheduled transition from Advanced Technology Demonstration status to acquisition program.
Current status: To be a part of the Joint Countermine Advanced Concept Technology Demonstration II in June 1998 (approximate 6 months slippage from original schedule).
Funding (fiscal years 1992-97): $49.8 million.
Programmed funding (fiscal years 1998-03): $7 million.

Program description: This system is an expendable, remotely operated, explosive mine neutralization device that is towed by a helicopter. It is intended to rapidly destroy mines and operate in day or night. Originally, it was intended to operate in conjunction with the AQS-20 sonar. With the termination of the AQS-20 and transition to AQS-X, the system will operate with the AQS-14A sonar, which will be integrated with a laser line scan system to provide interim mine identification capability.
Platform: MH-53 helicopters.
Mine threat: Bottom and moored mines in deep or shallow water.
Program start date: Fiscal year 1975.
Date of estimated completion of research & development phase: Fiscal year 2000, milestone III is scheduled.
Current status: Engineering, manufacturing, and development contract award scheduled for second quarter, fiscal year 1998.
Funding (fiscal years 1992-97): $12.4 million.
Programmed funding (fiscal years 1998-00): $22.6 million.

Program description: This system is an advanced technology demonstration program and is intended to employ laser targeting and supercavitating projectiles to neutralize near surface moored contact mines. Its objective is to provide fast reacting organic helicopter capability to safely and rapidly clear mines.
Platform: Helicopter.
Mine threat: Near surface moored contact mines.
Program start date: Fiscal year 1998.
Date of estimated completion of research & development phase: Fiscal year 2004.
Current status: Fiscal year 1998, demonstration of lethality against key mine types.
Programmed funding (fiscal years 1998-04): $65 million.

Program description: The Explosive Neutralization Advanced Technology Demonstration, as a group of four subsystems, is intended to demonstrate the capability to neutralize anti-invasion mines in the surf zone and craft landing zone. Two of the subsystems will consist of line charges and surf zone array, which are to be launched from an air cushion vehicle and propelled by new rocket motors for extended range and increased stand-off. These two subsystems will also have a third subsystem, a fire control system, for accurate placement of explosives. The fourth subsystem, the beach zone array, will consist of a glider and an array system. The glider, an unmanned, unpowered air vehicle, will be released by an air deployment vehicle. The glider will approach the beach by means of a global positioning system guidance and control system. To detonate and clear mines, it will deploy the array of nylon webbing and shaped charges over a predesignated target.
Platform: Unmanned air vehicle.
Mine threat: Anti-invasion mines in the surf and craft landing zones.
Program start date: Fiscal year 1993.
Date of estimated completion of research & development phase: Fiscal year 2005 for the line charges, surf zone array, and fire control system and fiscal year 2009 for the beach zone array.
Current status: Demonstration of fieldable prototype of the beach zone array scheduled for fiscal year 1998.
Funding (fiscal years 1993-97): $63.7 million.
Programmed funding (fiscal years 1998-03): $87.8 million.
Two examples of mine warfare programs that have been in the research and development phase for many years without advancing to procurement are the AQS-20, a mine-hunting sonar, and the Airborne Mine Neutralization System. The following tables illustrate the changes, including the recent series of internal Department of Defense (DOD) increases and decreases, to these programs' funding. The changes depicted in table II.1 resulted in a delay in the AQS-20 schedule. The production decision slipped 1 year, from second quarter fiscal year 1998 to second quarter fiscal year 1999.

Table II.1 (dollars in thousands)
As presented in the fiscal year 1996 President's budget: $218 (actual); $12,791 (estimated appropriation); $20,123 (estimate)
Adjustments: reprogramming from the Airborne Laser Mine Detection System; reinitiation of the Airborne Mine Neutralization System; realignment to the Shallow Water Mine Countermeasures program element; realignment to the Remote Minehunting System
Total, as presented in the fiscal year 1997 President's budget: $9,165 (adjusted actual); $12,390 (adjusted appropriation); $13,164 (revised estimate)

The changes depicted in table II.2 resulted in delays in the schedules of both the AQS-20 and the Airborne Mine Neutralization System. The production decision for the AQS-20 slipped an additional 6 months, to the fourth quarter fiscal year 1999. The production decision for the Airborne Mine Neutralization System slipped 1 year, from third quarter fiscal year 1999 to third quarter fiscal year 2000, due to funding constraints.

Table II.2: AQS-20 and Airborne Mine Neutralization System Funding Profile, as of February 1997
$12,355 (actual); $13,164 (revised estimate); $13,069 (estimate); $5,694 (estimate)
$11,974 (adjusted actual); $18,357 (adjusted appropriation); $16,503 (revised estimate); $19,937 (revised estimate)

The changes depicted in table II.3 reflect the addition of two new initiatives, the Configuration Theory Tactical Decision Aid and the Shallow Water Influence Minesweep System. Congress increased the fiscal year 1998 budget request by $2 million for the Shallow-Water Influence Minesweep System program.

Table II.3 (dollars in thousands)
As presented in the fiscal year 1998-99 President's budget: $18,357 (actual); $16,503 (estimated appropriation); $19,937 (estimate)
Adjustments: Small Business Innovative Research assessment; Configuration Theory Tactical Decision Aid
Total, as presented in the fiscal year 1999 President's budget: $17,969 (adjusted actual); $17,905 (adjusted appropriation); $20,054 (estimate)

Anton G. Blieberger, Evaluator-in-Charge
Pursuant to a congressional request, GAO reviewed the Navy's mine countermeasures efforts, focusing on the: (1) Navy's plans for improving mine countermeasures (MCM) capabilities; (2) status of current research, development, test, and evaluation (RDT&E) programs; and (3) process the Department of Defense (DOD) used to prepare the annual certification required by Public Law 102-190. GAO noted that: (1) the Navy has not decided on the mix of on-board and special purpose forces it wants to maintain in the future and committed the funding needed for developing and sustaining those capabilities; (2) this decision will determine the types and quantities of systems to be developed and their priority; (3) it also affects the schedule and cost of those developments and the design and cost of the platforms on which they will operate; (4) a final force structure decision will likely be determined by the level of resources the Navy decides to dedicate to the MCM mission in the future; (5) a few systems are scheduled for production decisions within the next 2 to 3 years, while other systems were not produced because the Navy never funded their procurement; (6) since 1992, the Navy has spent about $1.2 billion in RDT&E funds to improve its mine warfare capabilities; (7) however, this investment has not produced any systems that are ready to transition to production; (8) delaying factors include funding instability, changing requirements, cost growth, and unanticipated technical problems; (9) the Navy plans to spend an additional $1.5 billion for RDT&E over the next 6 years; (10) most officials interviewed said the annual certification process has served to increase the visibility of MCM requirements within DOD and the Navy, with positive results and should continue to be required; (11) however, as currently conducted, the annual certification process does not address the adequacy of overall resources for this mission, nor does it contain any measures against which the Navy's progress in enhancing its MCM capabilities can be evaluated; (12) the Chairman, Joint Chiefs of Staffs' review for resource sufficiency occurs after the Navy's budget proposals for its MCM program have been formalized; and (13) the review does not affect specific Navy MCM acquisition programs or overall MCM resource decisions.
The use of computer technology in schools has grown dramatically in the past several years. Surveys conducted by one marketing research firm estimated that in 1983 schools had 1 computer for every 125 students; in 1997, the ratio had increased to 1 computer for every 9 students. Meanwhile, many education technology experts believe that current levels of school technology do not give students enough access to realize technology's full potential. For example, many studies suggest that schools should have a ratio of four to five students for every computer, or five students for every multimedia computer. In addition, concern has been expressed that aging school computers may not be able to run newer computer programs, use multimedia technology, or access the Internet. A computer-based education technology program has many components, as figure 1 shows, which range from the computer hardware and software to the maintenance and technical support needed to keep the system running. Although technology programs may define the components differently, they generally cover the same combination of equipment and support elements. Computer-based technology can be used to augment learning in a number of ways. These include drill-and-practice programs to improve basic skills; programs providing students with the tools to write and produce multimedia projects that combine text, sound, graphics, and video; programs providing access to information resources, such as on the Internet; and networks that support collaborative and active learning. Research on school technology has not, however, provided clear and comprehensive conclusions about its impact on student achievement. Although some studies have shown measurable improvements in some areas, less research data exist on the impact of the more complex uses of technology. Our work focused on funding for school technology. We did not evaluate district goals or accomplishments or assess the value of technology in education. Each of the districts we visited used a combination of funding sources to support technology in its schools (see table 1). At the local level, districts allocated funds from their district operating budgets, levied special taxes, or both. Districts also obtained funds from federal and state programs specifically designated to support school technology or from federal and state programs that could be used for this and other purposes. Finally, districts obtained private grants and solicited contributions from businesses. Although some individual schools in the districts we visited raised some funds, obtaining technology funding was more a district-level function than a school-level function, according to our study. Although districts tapped many sources, nearly all of them obtained the majority of their funding from one main source. The source, however, varied by district. For example, in Seattle, a 1991 local capital levy has provided the majority of the district's education technology funding to date. In Gahanna, the district operating budget has provided the majority of technology funding. All five districts chose to allocate funds for technology from their operating budgets. The portions allocated ranged widely from 16 to 77 percent of their total technology funding. Two districts—Seattle and Roswell—also raised significant portions of their technology funding using local bonds or special levies. Manchester and Seattle won highly competitive 5-year Technology Innovation Challenge Grants for $2.8 million and $7 million, respectively.
The grant provided the major source of funding for Manchester’s technology program—about 66 percent of the funding. The $1.5 million in grant funding Seattle has received so far accounted for about 4 percent of the district’s technology funding. All five districts reported using federal and state program funding that was not specifically designated for technology but could be used for this purpose if it fulfilled program goals. For example, four districts reported using federal title I funds for technology. In Manchester, a schoolwide program at a title I elementary school we visited had funded many of its 27 computers as part of its title I program. Three districts used state program funds, such as textbook or instructional materials funds, to support their technology programs. In Davidson County, for example, the district has directed about $2 million in such funds, including those for exceptional and at-risk children as well as vocational education, to education technology. All districts received assistance, such as grants and monetary and in-kind donations, from businesses, foundations, and individuals. Such funding constituted about 3 percent or less of their technology funding. It is important to note, however, that our selection criteria excluded districts that had benefited from extraordinary assistance such as those receiving the majority of their funding from a company or individual. Officials we spoke with attributed the limited business contributions in their districts to a variety of reasons, including businesses not fully understanding the extent of the schools’ needs and businesses feeling overburdened by the large number of requests from the community for assistance. Some said their district simply had few businesses from which to solicit help. Nonetheless, all five districts noted the importance of business contributions and were cultivating their ties with business. Some individual schools also raised supplemental funds through parent-teacher organization activities and other school fund-raisers. Such supplemental funding amounted to generally less than $7,000 annually but did range as high as $84,000 over 4 years at one school. Staff at two schools reported that teachers and other staff used their personal funds to support technology in amounts ranging from $100 to over $1,000. Officials in the districts we visited identified a variety of barriers to obtaining technology funding. Four types of barriers were common to most districts and considered by some to be especially significant. (See table 2.) Officials in all of the districts we visited reported that district-level funding was difficult to obtain for technology because it was just one of many important needs that competed for limited district resources. For example, a Gahanna official reported that his district’s student population had grown, and the district needed to hire more teachers. A Seattle official reported that his district had $275 million in deferred maintenance needs. Some districts had mandates to meet certain needs before making funding available for other expenditures like technology. Manchester officials noted, for example, that required special education spending constituted 26 percent of their 1997 district operating budget, a figure expected to rise to 27.5 percent in fiscal year 1998. Officials from all districts said that resistance to higher taxes affected their ability to increase district operating revenue to help meet their technology goals.
For example, in Davidson County, the local property tax rate is among the lowest in the state, and officials reported that many county residents were attracted to the area because of the tax rates. In addition, two districts—Roswell and Seattle—did not have the ability to increase the local portion of their operating budgets because of state school finance systems that—to improve equity—limited the amount of funds districts could raise locally. Officials in three districts reported that the antitax sentiment also affected their ability to pass special technology levies and bond measures. Although all districts identified an environment of tax resistance in their communities, most said they believed the community generally supported education. Many officials reported that they did not have the time to search for technology funding in addition to performing their other job responsibilities. They said that they needed considerable time to develop funding proposals or apply for grants. For example, one technology director with previous grant-writing experience said she would need an uninterrupted month to submit a good application for a Department of Commerce telecommunications infrastructure grant. As a result, she did not apply for this grant. The technology director in Manchester said that when the district applied for a Technology Innovation Challenge Grant, two district staff had to drop all other duties to complete the application within the 4-week time frame available. Officials also noted that corporations and foundations typically like to give funds to schools where they can make a dramatic difference. Districts have employed general strategies to overcome funding barriers rather than address specific barriers. The strategies have involved two main approaches—efforts to inform decisionmakers about the importance of and need for technology and leadership efforts to secure support for technology initiatives. In their information efforts, district officials have addressed a broad range of audiences about the importance of and need for technology. These audiences have included school board members, city council representatives, service group members, parents, community taxpayers, and state officials. These presentations have included technology demonstrations, parent information nights, lobbying efforts with state officials, and grassroots efforts to encourage voter participation in levy or bond elections. Roswell, for example, set up a model technology school and used it to demonstrate the use of technology in school classrooms. In the districts we visited, both district officials and the business community provided leadership to support school technology. In all districts, district technology directors played a central leadership role in envisioning, funding, and implementing their respective technology programs over multiyear periods and continued to be consulted for expertise and guidance. In some districts, the superintendent also assumed a role in garnering support and funding for the technology program. Beyond the district office, business community members sometimes assumed leadership roles to support technology by entering into partnerships with the districts to help in technology development efforts as well as in obtaining funding. All five districts we visited had developed such partnerships with local businesses.
In Roswell and Seattle, education foundations comprising business community leaders had helped their school districts’ efforts to plan and implement technology, providing both leadership and funding for technology. Other districts we visited continued to cultivate their ties with the business community through organizations such as a business advisory council and a community consortium. Nearly all districts reported maintenance, technical support, and training—components often dependent on staff—as more difficult to fund than other components. Officials we interviewed cited several limitations associated with funding sources that affected their use for staff costs. First, some sources simply could not be used to pay for staff. Officials in Roswell and Seattle noted that special levy and bond monies, their main sources of technology funds, could not be used to support staff because the funds were restricted to capital expenditures. Second, some funding sources do not suit the ongoing nature of staff costs. Officials noted, for example, that grants and other sources provided for a limited time or that fluctuate from year to year are not suited to supporting staff. Most districts funded technology staff primarily from district operating budgets. Several officials noted that competing needs and the limited size of district budgets make it difficult to increase technology staff positions. Officials in all five districts reported having fewer staff than needed. Some technology directors and trainers reported performing maintenance or technical support at the expense of their other duties because of a lack of sufficient support staff. One result was lengthy periods—up to 2 weeks in some cases—when computers and other equipment were unavailable. Several officials observed that this can be frustrating to teachers and discourage them from using the equipment. Teacher training was also affected by limited funding for staff costs, according to officials. In one district, for example, an official said that the number of district trainers was insufficient to provide the desired in-depth training to all teachers. Most district officials expressed a desire for more technology training capability, noting that teacher training promoted the most effective use of the equipment. A number of districts had developed mitigating approaches to a lack of technology support staff. These included purchasing extended warranties on new equipment, training students to provide technical support in their schools, and designating teachers to help with technical support and training. Looking ahead, the districts faced (1) ongoing costs for maintenance, technical support, training, and telecommunications and (2) periodic costs of upgrading and replacing hardware, software, and infrastructure to sustain programs. Most districts planned to continue funding ongoing maintenance, technical support, training, and telecommunications costs primarily from their operating budgets and to sustain at least current levels of support. Nonetheless, most districts believed that current levels of maintenance and technical support were not adequate and that demand for staff would likely grow. Some officials talked about hiring staff in small increments but were unsure to what extent future district budgets would support this growing need. The periodic costs to upgrade and replace hardware, software, or infrastructure can be substantial, and most districts faced uncertainty in continuing to fund them with current sources. For example, Davidson County and Gahanna funded significant portions of their hardware with state technology funding.
However, officials told us that in the past, the level of state technology funding had been significantly reduced due to the changing priorities of their state legislatures. In Seattle, special levies are the district’s primary funding source, but passing these initiatives is unpredictable. Officials in all districts underscored the need for stable funding sources and for technology to be considered a basic education expenditure rather than an added expense. They also suggested ways to accomplish this. Some proposed including a line item in the district operating budget to demonstrate district commitment to technology as well as provide a more stable funding source. One official said that technology is increasingly considered part of basic education and as such should be included in the state’s formula funding. Without such funding, he said districts would be divided into those that could “sell” technology to voters and those that could not. In closing, technology supporters in the districts we studied not only had to garner support at the start for the district’s technology, but they also had to continue making that case year after year. To develop support for technology, leaders in these five school districts used a broad informational approach to educate the community, and they formed local partnerships with business. Each district has developed some ties with business. Nonetheless, funding from private sources, including business, for each district, constituted no more than about 3 percent of what the district has spent on its technology program. Other districts like these may need to continue depending mainly on special local bonds and levies, state assistance, and federal grants for initially buying and replacing equipment and on their operating budgets for other technology needs. Lack of staff for seeking and applying for funding and the difficulty of funding technology support staff were major concerns of officials in all the districts we studied. Too few staff to maintain equipment and support technology users in the schools could lead to extensive computer downtime, teacher frustration, and, ultimately, to reduced use of a significant technology investment. The technology program in each of the five districts we visited had not yet secured a clearly defined and relatively stable funding source, such as a line item in the operating budget or a part of the state’s education funding formula. As a result, district officials for the foreseeable future will continue trying to piece together funding from various sources to maintain their technology programs and keep them viable. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the Task Force may have.
GAO discussed how school districts obtain funds for the acquisition of education technology, focusing on: (1) sources of funding school districts have used to develop and fund their technology programs; (2) barriers districts have faced in funding the technology goals they set, and how they attempted to deal with these barriers; (3) components of districts' technology programs that have been the most difficult to fund, and what the consequences have been; and (4) districts' plans to deal with the ongoing costs of the technology they have acquired. GAO noted that: (1) the five districts it studied used a variety of ways to fund their technology programs; (2) four types of barriers seemed to be common to several districts: (a) technology was just one of a number of competing needs and priorities, such as upkeep of school buildings; (b) local community resistance to higher taxes limited districts' ability to raise more revenue; (c) officials said they did not have enough staff for fund-raising efforts and therefore had difficulty obtaining grants and funding from other sources such as business; and (d) some funding sources had restrictive conditions or requirements that made funding difficult to obtain; (3) to overcome these barriers, officials reported that their districts used a variety of methods to educate and inform the school board and the community about the value of technology; (4) these ranged from presentations to parent groups to the establishment of a model program at one school to showcase the value of technology; (5) the parts of the technology program that were hardest to fund, according to those GAO interviewed, were components such as maintenance, training, and technical support, which depend heavily on staff positions; (6) for example, in two locations special levy and bond funding could be used only for capital expenditures--not for staff; (7) in several districts GAO visited, officials said that staffing shortfalls in maintenance and technical support had resulted in large workloads for existing staff and in maintenance backlogs; (8) most said this resulted in reduced computer use because computers were out of service; and (9) as these districts looked to the future to support the ongoing and periodic costs of their technology programs, they typically planned to continue using a variety of funding sources despite uncertainties associated with many of these sources.
According to EPA, perchlorate can interfere with the normal functioning of the thyroid gland by competitively inhibiting the transport of iodide into the thyroid, which can then affect production of thyroid hormones. The fetus depends on an adequate supply of maternal thyroid hormone for its central nervous system development during the first trimester of pregnancy. The National Academy of Sciences reported that inhibition of iodide uptake from low-level perchlorate exposure may increase the risk of neurodevelopmental impairment in fetuses of high-risk mothers—pregnant women who might have iodine deficiency or hypothyroidism (reduced thyroid functioning). The Academy recognized the differences in sensitivity to perchlorate exposure between the healthy adults used in some studies and the most sensitive population, the fetuses of these high-risk mothers. Consequently, the Academy included a 10-fold uncertainty factor in its recommended reference dose to protect these sensitive populations. The Academy also called for additional research to help determine what effects low-level perchlorate exposure may have on children and pregnant women. EPA has issued drinking water regulations for more than 90 contaminants. The Safe Drinking Water Act, as amended in 1996, requires EPA to make regulatory determinations on at least five unregulated contaminants and decide whether to regulate these contaminants with a national primary drinking water regulation. The act requires that these determinations be made every five years. The unregulated contaminants are typically chosen from a list known as the Contaminant Candidate List (CCL), which the act also requires EPA to publish every five years. EPA published the second CCL on February 24, 2005. On April 11, 2007, EPA announced its preliminary determination not to regulate 11 of the contaminants on this list. The agency also announced that it was not making a regulatory determination for perchlorate because EPA believed that additional information may be needed to more fully characterize perchlorate exposure and determine whether regulating perchlorate in drinking water presents a meaningful opportunity for health risk reduction. Several federal environmental laws provide EPA and states authorized by EPA with broad authorities to respond to actual or threatened releases of substances that may endanger public health or the environment. For example, the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended, authorizes EPA to investigate the release of any hazardous substance, pollutant, or contaminant. The Resource Conservation and Recovery Act of 1976 (RCRA) gives EPA authority to order a cleanup of hazardous waste when there is an imminent and substantial endangerment to public health or the environment, and one federal court has ruled that perchlorate is a hazardous waste under RCRA. The Clean Water Act’s National Pollutant Discharge Elimination System (NPDES) provisions authorize EPA, which may, in turn, authorize states, to regulate the discharge of pollutants into waters of the United States. These pollutants may include contaminants such as perchlorate.
The Safe Drinking Water Act authorizes EPA to respond to actual or threatened releases of contaminants into public water systems or underground sources of drinking water, regardless of whether the contaminant is regulated or unregulated, where there is an imminent and substantial endangerment to health and the appropriate state and local governments have not taken appropriate actions. Under certain environmental laws such as RCRA, EPA can authorize states to implement the requirements as long as the state programs are at least equivalent to the federal program and provide for adequate enforcement. In addition, some states have their own environmental and water quality laws that provide state and local agencies with the authority to monitor, sample, and require cleanup of various regulated and unregulated hazardous substances that pose an imminent and substantial danger to public health. For example, the California Water Code authorizes Regional Water Control Boards to require sampling of waste discharges and to direct cleanup and abatement, if necessary, of any threat to water, including the release of an unregulated contaminant such as perchlorate. Finally, according to EPA and state officials, at least 9 states have established nonregulatory action levels or perchlorate advisories, ranging from under 1 part per billion to 18 parts per billion, under which responsible parties have been required to sample and clean up perchlorate. For example, according to California officials, the state of California has a public health goal for perchlorate of 6 parts per billion and has used the goal to require cleanup at one site. Because information on the extent of perchlorate contamination was not readily available, we thoroughly reviewed available perchlorate sampling reports and discussed them with federal and state environmental officials. We identified 395 sites in 35 states, the District of Columbia, and 2 commonwealths of the United States where perchlorate has been found in drinking water, groundwater, surface water, sediment, or soil. The perchlorate concentrations ranged from the minimum reporting level of 4 parts per billion to more than 3.7 million parts per billion—a level found in groundwater at one of the sites. More than one-half of the contaminated sites were found in Texas (118) and California (106), where both states conducted broad investigations to determine the extent of perchlorate contamination. As shown in figure 1, the highest perchlorate concentrations were found in five states—Arkansas, California, Nevada, Texas, and Utah—where, collectively, 11 sites had concentrations exceeding 500,000 parts per billion. However, most of the 395 sites did not have such high levels of contamination. We found 271 sites where the concentration was less than 24.5 parts per billion, the drinking water concentration equivalent calculated on the basis of EPA’s reference dose. According to EPA and state agency officials, the greatest known source of contamination was defense and aerospace activities. As shown in figure 2, our analysis found that, at 110 of the 395 sites, the perchlorate source was related to propellant manufacturing, rocket motor testing and firing, and explosives testing and disposal at DOD, NASA, and defense-related industries. Officials said the source of the contamination at another 58 sites was agriculture, a variety of other commercial activities such as fireworks and flare manufacturing, and perchlorate manufacturing and handling.
At the remaining sites, state agency officials said the source of the perchlorate was either undetermined (122 sites) or naturally occurring (105 sites). Further, all 105 sites with naturally occurring perchlorate are located in the Texas high plains region, where perchlorate concentrations range from 4 to 59 parts per billion. Of the sites we identified, 153 were public drinking water systems. The Safe Drinking Water Act’s Unregulated Contaminant Monitoring Regulation required sampling of public drinking water systems for a 12-month period between 2001 and 2003. As of January 2005, 153 (about 4 percent) of the 3,722 systems that were sampled had reported finding perchlorate to EPA. Located across 26 states and 2 commonwealths, these 153 systems accounted for more than one-third of the sites we identified; the perchlorate concentrations they reported ranged from 4 parts per billion to 420 parts per billion but averaged less than 10 parts per billion. Only 14 of the 153 public drinking water systems had concentration levels above 24.5 parts per billion, the drinking water equivalent calculated on the basis of EPA’s revised perchlorate reference dose. California had the most public water systems with perchlorate; 58 systems there reported finding perchlorate in drinking water. The highest drinking water perchlorate concentration of 420 parts per billion was found in Puerto Rico in 2002. Subsequent sampling in Puerto Rico did not find any perchlorate, and officials said the source of the initial finding was undetermined. These 153 public drinking water systems that found perchlorate serve populated areas, and an EPA official estimated that as many as 10 million people may have been exposed to the chemical. EPA officials told us they do not know the source of most of the contamination found in public drinking water systems, but that contamination in 32 systems in Arizona, California, and Nevada was likely due to previous perchlorate manufacturing at a Kerr McGee Chemical Company site in Henderson, Nevada. Regional EPA and state officials told us they did not plan to clean up perchlorate found at public drinking water sites until EPA establishes a drinking water standard for perchlorate. In some cases, officials did not plan to clean up because subsequent sampling was unable to confirm that perchlorate was present. EPA officials said the agency does not centrally track or monitor perchlorate detections or the status of cleanup activities. As a result, it is difficult to determine the extent of perchlorate contamination in the United States. EPA maintains a list of sites where cleanup or other response actions are underway, but the list does not include sites not reported to EPA. Moreover, EPA officials said they did not always know whether other federal and state agencies found perchlorate because, as is generally the case with unregulated contaminants, there is no requirement for states or other federal agencies to routinely report perchlorate findings to EPA. For example, DOD is not required to report to EPA when perchlorate is found on active installations and facilities. Consequently, EPA region officials in California said they did not know the Navy found perchlorate at the Naval Air Weapons Station at China Lake because the Navy did not report the finding to EPA. Further, states are not required to routinely notify EPA about perchlorate contamination they discover.
For example, EPA region officials in California said the Nevada state agency did not tell them perchlorate was found at Rocketdyne, an aerospace facility in Reno, or that it was being cleaned up. EPA only learned about the perchlorate contamination when the facility’s RCRA permit was renewed. In our May 2005 review, we conducted a literature search for studies of perchlorate health risks published from 1998 to 2005 and identified 125 studies on perchlorate and the thyroid. After interviewing DOD and EPA officials about which studies they considered important in assessing perchlorate health risks, we reviewed 90 that were relevant to our work. The findings of 26 of these studies indicated that perchlorate had an adverse effect on thyroid function and human health. In January 2005, the National Academy of Sciences considered many of these same studies and concluded that the studies did not support a clear link between perchlorate exposure and changes in the thyroid function or thyroid cancer in adults. Consequently, the Academy recommended additional research into the effect of perchlorate exposure on children and pregnant women but did not recommend a drinking water standard. DOD, EPA, and industry sponsored the majority of the 90 health studies we reviewed; the remaining studies were conducted by academic researchers and other federal agencies. Of these 90 studies, 49 were experiments that sought to determine the effects of perchlorate on humans, mammals, fish, and/or amphibians by exposing these groups to different doses of perchlorate over varied time periods and comparing the results with other groups that were not exposed. Twelve were field studies that compared humans, mammals, fish, and/or amphibians in areas known to be contaminated with the same groups in areas known to be uncontaminated. Both types of studies have limitations: the experimental studies were generally short in duration, and the field studies were generally limited by the researchers’ inability to control whether, how much, or how long the population in the contaminated areas was exposed. For another 29 studies, researchers reviewed several publicly available human and animal studies and used data derived from these studies to determine the process by which perchlorate affects the human thyroid and the highest exposure levels that did not adversely affect humans. The 3 remaining studies used another methodology. Many of the studies we reviewed contained only research findings, rather than conclusions or observations on the health effects of perchlorate. Appendix III from our 2005 report provides data on these studies, including who sponsored them; what methodologies were used; and, where presented, the author’s conclusions or findings on the effects of perchlorate. Only 44 of the studies we reviewed had conclusions on whether perchlorate had an adverse effect. However, adverse effects of perchlorate on the adult thyroid are difficult to evaluate because they may happen over longer time periods than can be observed in a typical research study. Moreover, different studies used the same perchlorate dose amount but observed different effects, which were attributed to variables such as the study design type or age of the subjects. Such unresolved questions were one of the bases for the differing conclusions in EPA, DOD, and academic studies on perchlorate dose amounts and effects. The adverse effects of perchlorate on development can be more easily studied and measured within typical study time frames. 
Of the studies we reviewed, 29 evaluated the effect of perchlorate on development, and 18 of these found adverse effects resulting from maternal exposure to perchlorate. According to EPA officials, the most sensitive population for perchlorate exposure is the fetus of a pregnant woman who is also nearly iodine-deficient. However, none of the 90 studies that we reviewed considered this population. Some studies reviewed the effect on the thyroid of pregnant rats, but we did not find any studies that considered perchlorate’s effect on the thyroid of nearly iodine-deficient pregnant rats. In January 2005, the National Academy of Sciences issued its report on EPA’s draft health assessment and the potential health effects of perchlorate. The Academy reported that although perchlorate affects thyroid functioning, there was not enough evidence to show that perchlorate causes adverse effects at the levels found in most environmental samples. Most of the studies that the Academy reviewed were field studies, the report said, which are limited because they cannot control whether, how much, or how long a population in a contaminated area is exposed. The Academy concluded that the studies did not support a clear link between perchlorate exposure and changes in the thyroid function in newborns and hypothyroidism or thyroid cancer in adults. In its report, the Academy noted that only 1 study examined the relationship between perchlorate exposure and adverse effects on children, and that no studies investigated the relationship between perchlorate exposure and adverse effects on vulnerable groups, such as low-birth-weight infants. The Academy concluded that an exposure level higher than initially recommended by EPA may not adversely affect a healthy adult. The Academy recommended that additional research be conducted on perchlorate exposure and its effect on children and pregnant women but did not recommend that EPA establish a drinking water standard. To address these issues, in October 2006, CDC researchers published the results of the first large study to examine the relationship between low-level perchlorate exposure and thyroid function in women with lower iodine levels. About 36 percent of U.S. women have these lower iodine levels. The study found decreases in a thyroid hormone that helps regulate the body’s metabolism and is needed for proper fetal neural development in pregnant women. Mr. Chairman, this concludes my testimony. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this presentation, please contact me, John Stephenson, at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Steven Elstein, Assistant Director, and Terrance Horner, Senior Analyst; Richard Johnson, Alison O’Neill, Kathleen Robertson, and Joe Thompson also made key contributions.
Perchlorate has been used for decades by the Department of Defense, the National Aeronautics and Space Administration, and the defense industry in manufacturing, testing, and firing missiles and rockets. Other uses include fireworks, fertilizers, and explosives. Perchlorate is readily dissolved and transported in water and has been found in groundwater, surface water, and soil across the country. Perchlorate emerged as a contaminant of concern because health studies have shown that it can affect the thyroid gland, which helps regulate the body's metabolism, and may cause developmental impairment in fetuses of pregnant women. In 2005, EPA adopted a reference dose for perchlorate equivalent to a drinking water concentration of 24.5 parts per billion (ppb)--the exposure level not expected to cause adverse effects in humans. Today's testimony updates GAO's May 2005 report, Perchlorate: A System to Track Sampling and Cleanup Results is Needed, GAO-05-462. It summarizes GAO's (1) compilation of the extent of perchlorate contamination in the U.S. and (2) review of peer-reviewed studies about perchlorate's health risks. GAO's 2005 report recommended that EPA work to track and monitor perchlorate detections and cleanup efforts. In December 2006, EPA reiterated its disagreement with this recommendation. GAO continues to believe such a system would better inform the public and others about perchlorate's presence in their communities. Perchlorate has been found at 395 sites in the U.S.--including 153 public drinking water systems--in concentrations ranging from 4 ppb to more than 3.7 million ppb. More than half the sites are in California and Texas, with the highest concentrations found in Arkansas, California, Texas, Nevada, and Utah. About 28 percent of sites were contaminated by defense and aerospace activities related to propellant manufacturing, rocket motor research and test firing, or explosives disposal. Federal and state agencies are not required to routinely report perchlorate findings to EPA, which does not track or monitor perchlorate detections or cleanup status. EPA recently decided not to regulate perchlorate in drinking water supplies pending further study. GAO reviewed 90 studies of health risks from perchlorate published from 1998 to 2005, and about one-quarter indicated that perchlorate had an adverse effect on human health, particularly thyroid function. In January 2005, the National Academy of Sciences also reviewed several studies and concluded that they did not support a clear link between perchlorate exposure and changes in thyroid function. The Academy did not recommend a drinking water standard but recommended additional research into the effect of perchlorate exposure on children and pregnant women. More recently, a large study by CDC scientists has identified adverse thyroid effects from perchlorate in women with the lower iodine levels found in about 36 percent of U.S. women.
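The 24.5 ppb figure cited above is a drinking water equivalent derived from EPA's reference dose, and the site and system percentages follow directly from the counts reported in this statement. The short sketch below works through that arithmetic. It is a minimal illustration only: the reference dose value (0.0007 mg per kg of body weight per day) and the default exposure assumptions (a 70-kg adult drinking 2 liters of water per day) are not stated in this statement and are included here as commonly cited assumptions, while the site and system counts are taken from the figures above.

```python
# Illustrative sketch only. The reference dose value and the exposure defaults
# below are assumptions for the standard conversion, not figures from this statement.

REFERENCE_DOSE_MG_PER_KG_DAY = 0.0007  # assumed perchlorate reference dose
BODY_WEIGHT_KG = 70                    # assumed adult body weight
WATER_INTAKE_L_PER_DAY = 2             # assumed daily drinking water intake

# Drinking water equivalent level in mg/L, then converted to parts per billion (ug/L).
dwel_mg_per_l = REFERENCE_DOSE_MG_PER_KG_DAY * BODY_WEIGHT_KG / WATER_INTAKE_L_PER_DAY
dwel_ppb = dwel_mg_per_l * 1000
print(f"Drinking water equivalent: {dwel_ppb:.1f} ppb")  # 24.5 ppb, as cited above

# Site counts by suspected source, as reported above (395 sites total).
sites_by_source = {
    "defense and aerospace activities": 110,
    "agriculture, commercial, and perchlorate manufacturing": 58,
    "undetermined": 122,
    "naturally occurring": 105,
}
total_sites = sum(sites_by_source.values())
for source, count in sites_by_source.items():
    print(f"{source}: {count} sites ({count / total_sites:.0%})")

# Public drinking water systems: 153 of 3,722 sampled systems reported perchlorate.
print(f"Systems reporting perchlorate: {153 / 3722:.1%} of systems sampled")
```

Running the sketch reproduces the roughly 28 percent share attributed to defense and aerospace activities and the roughly 4 percent of sampled public water systems that reported perchlorate.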
Human factors is a discipline concerned with, among other things, designing products that are efficient for people to use. As such, human factors combines features of many disciplines, including psychology, engineering, anthropology, sociology, and linguistics. Human factors R&D focuses on people as they interact with the design of products. The goal of human factors is to minimize the potential for design-induced error by ensuring that the equipment is suitable for the users and their environment. The human factors discipline can be described as having two components: human factors research, which seeks to acquire information, and human factors engineering, which seeks to apply the information gained from research to equipment, systems, software, and training, among other things. Recognizing the importance of human factors considerations, FAA issued a Human Factors Policy Order in 1993 that requires human factors issues to be integrated into the planning and execution of all FAA activities associated with system acquisitions and operations. FAA offers several guidance documents on implementing human factors considerations, which, FAA officials told us, helped aviation stakeholders, such as contractors and research institutions, meet the requirement. For example, officials with the MITRE Corporation told us that—in collaboration with FAA, airlines, and others—they researched human factors issues in the development of the Automatic Dependent Surveillance-Broadcast System, which is an information-reporting technology that, when used in conjunction with other navigation technologies, is expected to enable more precise information about aircraft position. MITRE collected human factors data on how pilots use the broadcast system, collaborated with human factors engineers, and asked human factors personnel to observe pilots’ in-flight interaction with the system while it was being tested. FAA has several offices that are tasked with ensuring that FAA programs integrate human factors issues. FAA’s Human Factors Research and Engineering Group (HFREG) is responsible for conducting the human factors R&D for NextGen, with the program director serving as the principal advisor to the FAA Administrator on human factors issues. HFREG is divided into three R&D areas: (1) Flight Deck/Aviation Maintenance/System Integration, which develops human performance information that the agency uses in fulfilling its regulatory responsibility and provides to the aviation industry for use in designing and operating aircraft and training pilots and maintenance personnel; (2) Air Traffic Control/Technical Operations, which researches human factors issues with respect to the roles of air traffic controllers, air traffic managers, and maintenance technicians; and (3) general Human Factors Research and Engineering, which attempts to ensure that the incorporation of human factors engineering is explicit, timely, systematic, comprehensive, efficient, and effective. In fiscal year 2009, HFREG conducted dozens of R&D activities including the following: Mitigating fatigue in flight operations. Collecting data on fatigue variables (such as sleep patterns, alertness, and mood) to develop better fatigue-mitigating duty and rest schedules, and outline limits of acceptable performance and flight safety. Improving pilots’ visual approaches through perceptual training. 
Investigating the skills pilots need in order to effectively conduct a visual approach, and developing training and performance metrics that will improve training and evaluation of pilots on visual approach tasks. Assessing safety risks. Calculating the safety risks of an error occurring in relation to the amount of time a controller spends on a task. In addition, FAA has assigned human factors experts to several offices involved in the development of new systems and in the oversight of aircraft operation and maintenance in order to ensure that human factors issues are addressed. FAA has established chief systems engineers to focus on agencywide, cross-cutting technical and operational issues pertaining to NextGen. Because of the scope of NextGen, FAA contracted with Volpe to provide a chief system engineer for human factors to identify and help the agency better ensure that human factors issues are integrated into the development of NextGen aviation systems. As a result of the observations and recommendations of that Volpe expert, FAA has designated a new position for human factors integration lead and assigned that position to FAA’s System Engineering and Safety organization. NASA has two units primarily responsible for ensuring human factors consideration in aviation: the Airspace Systems Program and the Aviation Safety Program, both within its Aeronautics Research Mission Directorate. The Airspace Systems Program is the unit chiefly responsible for NASA’s input into NextGen. The primary research role for the Airspace Systems Program is to contribute to the operations of the airspace system by developing concepts, capabilities, and technologies for high-capacity, efficient, and safe airspace systems. The Aviation Safety Program is dedicated to improving the safety of current and future aircraft operating in the national airspace system. The research focus is on the way aircraft are designed, built, operated, and maintained. Scientists and engineers in this program develop concepts and tools to address aircraft aging and durability, among other areas. FAA and NASA have each invested about $121 million in human factors R&D from fiscal year 2004 to fiscal year 2009 (see fig. 1). Starting in fiscal year 2005, NASA adjusted the size of its human factors research staff by reassigning some staff to other programs and reducing the contractor and academic technical support for human factors R&D. NASA reorganized its aeronautical research plan to focus on what it calls “fundamental research,” which takes a technology to a point where it can be further matured by manufacturers and eventually integrated into new aircraft or engine designs. FAA’s investment in human factors R&D is increasing, along with additional appropriations for overall research development, though overall R&D appears to be increasing at a higher rate (see fig. 2). NASA takes the lead in both identifying human factors concepts that need to be implemented to support a particular technology or system and developing the human factors engineering models and algorithms. NASA then works with FAA on testing the new concept and hands off the responsibility to FAA to make the concept operational. NASA officials told us that it generally takes a concept 5 to 7 years to become operational after NASA transfers responsibility to FAA. Furthermore, in June 2010, NASA officials informed us of a new Integrated Systems Research Program that is to focus on maturing and integrating NextGen technologies into operational systems. 
The program began in fiscal year 2010 at a funding level of $62.4 million. NextGen is a major transformation of the aviation system that will have significant implications for human factors considerations. NextGen will transform aviation procedures and the design of the aviation system and introduce new technologies that pose dramatic changes to the roles and responsibilities of both air traffic controllers and pilots and change the way they interface with their systems. According to FAA, under NextGen, a satellite-based system would guide all phases of a flight, including climb, cruise, descent, and taxi. Instead of monitoring aircraft movements using ground-based radar and transmitting voice flight instructions to aircraft, air traffic controllers would primarily monitor automated systems and intervene when anomalies and emergencies occur. As a result, FAA and NASA need to research the human factors considerations associated with the new roles of both flight crew and air traffic management staff, and incorporate the results into the implementation of the new system. In addition, FAA and NASA will have to identify and develop the training necessary for these changing roles, including the time frame before NextGen is fully realized, when some aircraft will be equipped with NextGen systems and others will not. FAA and NASA structure their NextGen human factors R&D according to a planned three-phase implementation of the NextGen system to align and prevent duplication of NextGen R&D efforts. FAA—which is ultimately in charge of implementing NextGen—is mainly responsible for the R&D to help address near-term implementation (2009-2013), which addresses the day-to-day promotion of the safe and efficient operation of the current aviation system and the implementation of some NextGen systems, and midterm implementation (2012-2018), which consists of leveraging existing aircraft capabilities and introducing new aircraft capabilities to establish a foundation for a longer-term evolution of the aviation system. Within FAA, the Air Traffic Organization is responsible for implementing near- and midterm improvements in coordination with other FAA lines of business. Within the Air Traffic Organization, several offices have different roles in the development of NextGen. For example, within the NextGen and Operations Planning Office, the NextGen Integration and Implementation (NGII) office is tasked with monitoring the progress of NextGen development and implementation and facilitating necessary coordination. These offices are also responsible for ensuring that human factors R&D conducted by HFREG is integrated into NextGen. NASA is responsible for conducting research to help address far-term implementation (2018-2025). As researchers better define system concepts, NASA officials inform FAA officials about research results and FAA officials then use the results to further develop the system. Figure 3 shows the key FAA and NASA organizations involved in human factors activities. FAA and NASA officials take advantage of a number of existing mechanisms to coordinate their human factors R&D efforts. First, they use the Research, Engineering, and Development Advisory Committee (REDAC), which advises on FAA’s research, engineering, and development activities with experts from industry, academia, and other government agencies. 
REDAC was established in 1989 to advise the FAA Administrator on research and development needs in human factors, air traffic services, airport technology, aircraft safety, and environmental issues. According to officials from both agencies, their collaboration on REDAC helps to coordinate human factors R&D efforts. One of the REDAC subcommittees is devoted to human factors, and according to officials with HFREG and NGII, has provided important perspectives on research management and coordination among agencies, including human factors R&D. Several REDAC subcommittees have held meetings at NASA to facilitate its participation and ensure that REDAC is briefed on relevant NASA human factors projects as well as FAA’s human factors R&D efforts. NASA officials also use REDAC to brief FAA officials on their human factors R&D efforts as well. In 2007, FAA and NASA took steps to better coordinate their human factors efforts as a direct result of REDAC’s influence. The REDAC human factors subcommittee recommended that FAA and NASA exchange information about their human factors R&D efforts to better facilitate research coordination, which FAA and NASA did. In addition, in 2009, the subcommittee noted that while the agencies had improved coordination of human factors R&D, they could further improve coordination of FAA and NASA human factors R&D related to the NextGen Controller Efficiency Program. In response, officials with HFREG and NGII told us that they now review NASA human factors research announcements to determine their applicability for FAA NextGen R&D. NASA proposals encompass research that includes human factors issues as part of the proposed work. In addition, FAA and NASA take advantage of existing forums, meetings, and interagency agreements to coordinate their human factors R&D efforts. Officials with HFREG and NGII told us that FAA and NASA exchange R&D results through reports, presentations, and joint panel discussions at various seminars and professional conferences, including the annual Human Factors and Ergonomics Society conference. FAA officials added that they also attend NASA’s technical interchange meetings to share ideas, learn of NASA’s human factors research efforts, and coordinate research projects. FAA also exchanges R&D planning documentation with NASA annually and as needed to facilitate human factors R&D coordination activities. The agencies also have undertaken specific efforts to coordinate human factors R&D related to NextGen. FAA established research transition teams to address research gaps and coordinate research between FAA and NASA related to the primary NextGen systems. In September 2008, we reported that FAA and NASA established four research transition teams to outline how the two agencies will jointly develop research requirements. These teams help FAA and NASA identify R&D needed to implement NextGen and ensure that the research is not only conducted but effectively transitioned to the implementing agency. FAA is to provide requirements for users of the technologies, while NASA is to conduct the research and provide an understanding of the engineering rationale for design decisions. According to FAA, these research transition teams facilitate coordination and transition of new technologies and concepts related to NextGen, including human factors components. 
For example, FAA and NASA are using the research transition teams to coordinate human factors research on the roles and responsibilities of air traffic controllers and pilots, as well as their information needs and procedures, among other issues. In addition, over the past several years, FAA and NASA officials have established memorandums and interagency agreements that allow the agencies to collaborate on research projects and coordinate human factors R&D related to NextGen. The agreements include reimbursable interagency agreements between HFREG and NASA to leverage resources. According to interagency agreements and FAA officials, leveraging activities include (1) researching, modeling, and testing the advanced technologies, automation, services, and capabilities that are required for successful implementation of NextGen, with particular emphasis on the issues associated with the NextGen flight deck; (2) allowing collaborative research to develop NextGen data communications, human factors collision avoidance requirements, aircraft merging and spacing separation assurance systems, and guidance for use of NextGen synthetic vision systems, enhanced flight vision systems, and advanced cockpit vision technologies; and (3) developing models, simulations, and demonstrations that will quantify efficiencies and benefits for the included programs and evaluate the operational feasibility of concepts. HFREG has approved or initiated 35 human factors research activities in partnership with NASA, universities, and private corporations. Supporting flight deck human factors efforts for NextGen, HFREG has approved or initiated 22 NextGen human factors research activities. FAA funds the activities and plans to budget $45 million for them between fiscal year 2009 and fiscal year 2011. In addition, HFREG has approved or initiated 13 NextGen air traffic control human factors research activities. NASA, the Volpe National Transportation Systems Center, and academic and private research facilities and institutions are conducting much of the research, with the goal of providing scientific and technical information to support development of NextGen-related standards, procedures, training, policy, and other guidance as well as human factors assessments of NextGen technologies and procedures. The research includes projects related to NextGen communication systems, automation and human roles and responsibilities, risk and error management, decision making, aircraft separation assurance and collision avoidance, ground operations, aircraft trajectory management, instrument procedures, personnel training and qualifications, and single pilot operations. NASA officials have agreed to consult HFREG officials about their NextGen research on human and automation roles and responsibilities and to keep them informed about that research. In addition, FAA signed two 5-year interagency agreements with NASA in 2009 to provide NASA up to $19 million in funding for human factors research projects covering both flight deck and air traffic control issues. While FAA and NASA officials have taken many steps to coordinate their human factors R&D, JPDO issued a report in April 2008 that raised concerns regarding FAA and NASA coordination of human factors R&D for supporting NextGen. Specifically, JPDO reported that there was no cross-agency plan for identifying and addressing priority NextGen human factors issues and recommended that FAA, in cooperation with NASA, develop such a plan.
JPDO recommended that FAA initiate an effort across agencies, industry, and academia to develop a cross-agency plan for NextGen human factors R&D that establishes focus areas for human factors research and development; inventories existing capabilities and laboratories for conducting human factors R&D; capitalizes on past and current human factors research and, where appropriate, reorients it; and ensures that the agencies perform the appropriate human factors R&D during the initial phases of NextGen. HFREG developed a human factors R&D portfolio in 2009 as part of its effort to improve cross-agency coordination of NextGen human factors R&D. Officials added that the portfolio is the beginning of their attempt to meet JPDO’s recommendation to develop a cross-agency human factors research plan. The portfolio lists and describes all past, ongoing, and planned NextGen human factors R&D projects. HFREG officials stated that the portfolio demonstrates the extent to which FAA and NASA human factors R&D efforts are aligned, and described the portfolio as a repository of NextGen human factors R&D. They added that the portfolio is intended to assist NextGen researchers in developing concepts, establishing requirements, identifying research gaps, and determining additional research and engineering considerations. FAA’s human factors portfolio is a good step toward better coordinating human factors R&D, but does not currently satisfy JPDO’s cross-agency plan recommendation. Our review of the FAA portfolio indicates that it is a listing and description of R&D projects and results, but not a cross-agency plan with features characteristic of plans, such as role definitions, goals, and time frames. Likewise, the DOT Inspector General reported in April 2010 that FAA has not developed a cross-agency research plan to identify and address how NextGen will affect the roles of controllers and pilots and help ensure that new concepts and technologies can be safely implemented. The Inspector General observed that such a plan would establish an agreed-upon set of initial focus areas for research, provide inventories of existing facilities for research, and capitalize on past and current research because both NASA and FAA conduct human factors work specifically for air traffic management. A cross-agency plan could help better ensure that FAA and NASA follow key collaboration practices. We have previously reported that federal agencies must effectively collaborate in order to deliver results more efficiently and in a way that is consistent with their multiple demands and limited resources. We identified several practices that could enhance and sustain collaboration efforts, including agreeing on roles and responsibilities, establishing mutually reinforcing or joint strategies, and establishing compatible policies, procedures, and other means to operate across agency boundaries, among other things. A cross-agency coordinating plan that establishes an agreed-upon set of initial focus areas for research, inventories existing facilities for research, and capitalizes on past and current research would help FAA and NASA more closely follow key practices for enhancing and sustaining collaboration. Our panel of nine human factors experts had mixed views about FAA’s and NASA’s efforts to improve coordination of their human factors R&D efforts.
While some experts told us that the steps the agencies have taken in response to JPDO and REDAC recommendations are sufficient, others suggested that FAA and NASA could do more to improve their human factors coordination. Similarly, officials representing two aviation associations had mixed views regarding coordination; one association stated that NASA and FAA are well coordinated, while another stated that FAA and NASA need to provide more clarity and consensus on their coordination plans. Four of the nine experts stated that FAA and NASA were coordinating well on human factors research related to NextGen and did not suggest further actions the agencies could take to better coordinate research. However, five experts stated that FAA and NASA could better coordinate human factors research. They suggested hosting additional human factors conferences to improve coordination, and prioritizing coordination of NextGen human factors research. More specifically, two experts told us that while the agencies have held conferences and research workshops (as previously discussed), they have not held conferences specifically devoted to human factors research for supporting NextGen. According to FAA officials, hosting such conferences is very expensive, so HFREG tries to leverage hosting sessions at external conferences and annual meetings. For example, FAA officials sponsored a session on human factors issues related to NextGen at the Human Factors and Ergonomics Society’s Aerospace Systems Technical Group meeting in May 2008 and plan to hold another similar session at this year’s annual meeting in September. FAA and NASA have created and shared planning documents for how the agency will incorporate human factors R&D into NextGen. As previously noted, FAA has taken steps to standardize the way it integrates human factors considerations into all aviation projects. To this end, FAA developed a NextGen Human System Integration Roadmap to identify and address human factors R&D needs for supporting NextGen in particular. In addition, as previously discussed, FAA created the Human Factors Portfolio, which lists and describes all past, ongoing, and planned NextGen human factors R&D projects. According to FAA, the portfolio was intended to identify potential gaps and unfunded R&D needs across midterm and potential far-term operational improvements for NextGen. Although we find it currently lacking as a coordination tool, it does enumerate the NextGen projects that are under way, which could be useful in terms of monitoring the efforts of other stakeholders. In addition, HFREG officials told us that FAA has a range of human factors R&D initiatives that support NextGen. FAA not only conducts focus groups and interviews with a panel of human factors experts, but also conducts live simulations and field trials to evaluate system and human performance in different scenarios. For example, FAA conducted human simulations with pilots and air traffic controllers in fiscal year 2008 and planned further simulations for its High Density Airport Capacity and Efficiency Improvement Project in fiscal year 2009. The agency also conducts field surveys and interviews of operational personnel that are extensively used to address major NextGen and other aviation human factors issues that have an impact on the workforce. For example, FAA plans to conduct a survey to assess the degree of fatigue in the controller workforce. NASA also has human factors research efforts that support NextGen. 
Officials told us that NASA experiments with early concept technologies that will involve human interaction, thereby fully leveraging the strengths and mitigating the weaknesses of both the human and automated components. NASA staff then conduct simulations to test human compatibility and subsequently help FAA develop the technologies that prove capable of supporting NextGen. Over the last 2 years, FAA has also dedicated financial resources specifically to incorporating human factors R&D into NextGen. Prior to fiscal year 2008, FAA funded NextGen projects, one of various types of human factors R&D, from its overall human factors R&D budget; however, since fiscal year 2008, FAA has had a specific human factors research and development budget for NextGen. To incorporate human factors issues into NextGen (for example, by conducting additional human simulations and field trials), FAA invested $25.5 million in human factors R&D specifically dedicated to NextGen from fiscal year 2008 through fiscal year 2010, and has requested additional funding for fiscal years 2011 through 2013. NASA officials told us that NASA conducts applied human factors research across its Aviation Safety and Airspace Systems programs and does not have a specific line item budget for NextGen. According to these officials, this research addresses human factors considerations for new concepts and technologies applicable to NextGen. In addition, NASA’s Aeronautics Research Mission Directorate programs were realigned in 2006, causing difficulty in assessing funding trends across several years of similar research activities. For the most part, aviation human factors experts we interviewed stated that FAA’s and NASA’s human factors R&D efforts adequately support NextGen. For example, experts commended FAA and NASA for appropriately conducting human factors R&D according to the three-phase implementation structure for NextGen systems. As previously mentioned, FAA is mainly responsible for R&D to support near-term implementation and midterm implementation, while NASA conducts much of the research to address far-term implementation. One expert also told us that FAA, in response to REDAC input, has developed a good method for understanding likely human performance. NASA also has modeled NextGen systems to predict how beneficial they will be to users. However, a majority of experts offered suggestions for further incorporating human factors issues into NextGen. Experts specifically identified the following suggestions: Better ensure that human factors issues are fully integrated throughout the design and development of NextGen systems. Human factors must be considered and integrated throughout the design and development of aviation systems. Failure to fully consider human factors issues at all stages can increase costs and delay projects. Six of nine experts and a senior official at the Volpe National Transportation Systems Center were concerned that NextGen developers may not be adequately considering human factors R&D throughout the entire NextGen planning and implementation process. FAA has not fully integrated human factors considerations into the development of some aviation systems. For example, FAA did not fully address human factors considerations in developing the En Route Automation Modernization (ERAM) system, which FAA plans to complete by 2010.
According to the National Air Traffic Controllers Association (NATCA), air traffic controllers involved in initial operations capabilities tests at an air traffic control center in Salt Lake City have encountered significant problems using the system. According to NATCA, controllers have found the new formats cumbersome, confusing, and difficult to navigate, thus indicating that FAA did not adequately involve those who operate the system (controllers) in the early phases of system development. As a result, to better ensure optimal performance of ERAM, FAA will have to address these human factors issues before it deploys the new system. This could increase the costs or delay the implementation of other components of NextGen, such as the previously mentioned Automatic Dependent Surveillance-Broadcast System, since the operation of numerous NextGen components will depend on this new system. FAA officials within the En Route Automation Modernization office agreed with NATCA’s views on the new system and added that the simulation capabilities of its Technical Center in Atlantic City, New Jersey, where the agency conducts human factors testing, were not robust enough to capture all of the problems subsequently identified by controllers. In May 2010, however, FAA announced the building of an Aviation Research and Technology Park near FAA’s Technical Center to provide a central location for partners in academia, industry, and other state and federal government agencies to work on NextGen. According to FAA, the park is being built with no direct cost to FAA and has amassed $3.5 million in grant funding. In June 2010, FAA issued a task order to MITRE Corporation to conduct a programmatic review of the ERAM problem and to assess, among other things, the circumstances that led to the current delay. MITRE is expected to issue a final report on October 1, 2010. Similarly, in reviewing the development of the Operational and Supportability Implementation System, the Department of Transportation’s Inspector General reported that FAA identified a number of significant human factors concerns with the system, such as inadequately addressing weather information. The Inspector General concluded that system developers did not adequately consider human factors research throughout design and development, thereby contributing to the delay of the system’s implementation. Likewise, as noted in a report we issued in 2005, FAA’s failure to provide adequate attention to human factors issues when implementing the Standard Terminal Automation Replacement System resulted in schedule slips and a significant cost increase of $500 million. As noted, however, since fiscal year 2008, FAA has designated funding solely for human factors R&D supporting NextGen. It remains to be seen if FAA’s added emphasis on human factors research and engineering will better ensure that human factors issues are fully integrated into the development of future NextGen components. Ensuring the mitigation of human factors issues also involves oversight of contractors. HFREG officials told us that they do not track vendors to make sure they are considering human factors R&D issues in their development, as this is a responsibility of the program managers who lead procurement efforts for FAA systems. However, once contracts are awarded, contractors are supposed to follow the contract specifications, which can include human factors system performance requirements.
HFREG officials told us that in the past they collaborated with program office human factors coordinators to assess outside vendors’ compliance with human factors requirements; they found that the contractors were not in full compliance, particularly with respect to human factors. In April 2010, the Department of Transportation’s Inspector General also expressed concern about FAA’s ineffective oversight of a contractor in developing NextGen systems, adding that NextGen implementation will require significant contract oversight. Furthermore, FAA’s post-implementation review of the Advanced Technologies and Oceanic Procedures system concluded that FAA and the contractors who developed the system did not, from a human factors perspective, develop the system to meet FAA’s needs. The post-implementation review recommended that for future systems, FAA should ensure that it articulates to contractors in unambiguous terms the human factors-related characteristics that the proposed system must meet. According to the Chief Scientist for NextGen and Operations Planning, a contractor developing an aviation system may have implemented human factors designs that were originally flawed or may have had a flawed methodology for incorporating human factors issues into system development. FAA program offices and contractors often support the incorporation of human factors considerations in a system by convening a panel of controllers and obtaining their feedback. Such a method may result in the controllers providing information regarding their preferences instead of information regarding the system’s usability. An alternative method may be to conduct a modeling effort that analyzes data on human performance for certain components of the system. HFREG officials also noted that under the best of circumstances, all major and most other human factors issues should be identified and mitigated during system development, making it unusual for additional problems to arise when a system is being implemented. To address this issue, experts stated that FAA should ensure system developers consider human factors in all phases of the development of aviation systems (as required by the Human Factors Policy Order). Having oversight of system developers (including contractors) that develop NextGen systems to make sure they adhere to FAA’s Human Factors Policy Order would significantly reduce the possibility of costly delays. FAA has taken action to improve its oversight of contractors. For example, in its June 2010 letter to MITRE, FAA requested an assessment of the ERAM contractor’s program management procedures and practices as part of an overall review of the program. Improve collaboration of human factors efforts across FAA departments. Collaboration within FAA departments is important to ensure that aviation systems are designed and developed with agency input from human factors researchers. Several experts we interviewed stated that system development projects with a human factors research component take place in different departments and offices at FAA, and that those developing the systems do not always collaborate. While HFREG provides R&D and engineering support, HFREG officials told us that there is no requirement for program offices or developers to consult with HFREG. HFREG conducted a post-implementation review of the Advanced Technologies and Oceanic Procedures system that implied that system managers did not properly consider human factors issues.
This suggests that the system managers either did not consult human factors stakeholders (including HFREG) or did not fully address their human factors issues through a collaborative working relationship. As a result, the post-implementation review concluded that from a human factors perspective, the system that was implemented in the field was not the system FAA had asked for. FAA’s experience in developing the Advanced Technologies and Oceanic Procedures system is an indication of what can happen when system developers fail to collaborate with human factors specialists and develop a comprehensive human factors program. HFREG officials also told us that, to improve collaboration, the Chief Scientist of the NextGen and Operations Planning unit sponsored a technical interchange meeting in January 2010 to better ensure that all FAA units involved in NextGen development are aware of the need to fully consider human factors in their work. The Chief Scientist plans to host another technical interchange meeting on July 29, 2010. A majority of the experts we interviewed agree that strong leadership is needed to provide adequate consideration of human factors issues within NextGen. Furthermore, a September 2008 National Academy of Public Administration report identified leadership as the single most important element of success for large-scale systems integration efforts like NextGen. That report highlighted leadership as a NextGen implementation challenge. The critical impact of human factors issues on NextGen indicates that strong leadership is needed to ensure these issues remain a priority for NextGen. FAA has not made it a priority to consistently staff the top two leadership positions within the agency that are formally responsible for human factors R&D. Specifically, the Chief Systems Engineer for Human Factors position has been vacant since the previous chief retired in January 2010. Moreover, FAA did not assign a permanent program director of HFREG for 16 months, from January 2009 until FAA filled the position in June 2010. The leadership void was the issue most frequently identified by the nine experts. Seven of nine experts we interviewed told us that the lack of leadership within FAA is a significant challenge in ensuring that human factors R&D supports NextGen. Although a majority of the experts were concerned that the leadership void could have prevented human factors issues from being fully considered for NextGen, subsequently delaying the implementation of a system, none could identify any specific examples. Nevertheless, FAA officials emphasized the importance of both positions. FAA officials told us that the Chief Systems Engineer position could be pivotal in integrating and maximizing the effectiveness of human factors in support of NextGen and is thus critical to prioritizing NextGen research and resources within FAA. JPDO officials we interviewed stressed that the program director of HFREG is the single most important position needed to ensure that the necessary human factors R&D is conducted and that the results are integrated into the development of NextGen systems. According to FAA officials, FAA has not had a chance to fill the position of Chief Systems Engineer—which FAA now refers to as the human factors integration lead—because of a hiring freeze and uncertainty about which unit the position should be placed in. FAA has resolved those issues and plans to begin the process for filling the position.
Officials cautioned, however, that it may take a long time to find a qualified candidate with the right human factors expertise and other relevant skill sets. Nonetheless, FAA would like to fill the position by the close of fiscal year 2010. FAA officials also told us that it took a long time to fill the position of program director for HFREG, in part because of the long process of completing required personnel administrative procedures. The new program director of HFREG was formerly the acting program director and had been in that position since the previous program director left. The assignment to program director involved a change in position classification that required several time-consuming administrative procedures, according to HFREG officials and an FAA senior executive. Experts also told us that, in filling these positions, FAA should ensure the new leaders have adequate authority to make sure that human factors issues are considered (particularly early in system development) and prioritized during all phases of NextGen development. These positions currently lack the authority to ensure that human factors issues are addressed early and throughout the NextGen system development process. Such authority could mitigate the need to redesign these systems after implementation has begun, which can cause delays and add costs. For example, as previously discussed, FAA’s human factors plans have not adequately addressed how humans will use newly developed NextGen weather information. One of the experts we consulted who has worked extensively with FAA on human factors R&D told us that a program director of HFREG or a Chief Systems Engineer with adequate authority could have reviewed the weather information to ensure that human factors were fully integrated into that and other NextGen systems. However, in filling the position of program director of HFREG, FAA did not grant the new program director additional authority to review NextGen programs and ensure that human factors issues are addressed. HFREG officials told us that FAA is reviewing how responsibility and authority for conducting human factors activities are distributed among HFREG, service units, and other offices in order to better serve the human factors needs of NextGen. Human factors research must be incorporated into NextGen to ensure that controllers, pilots, and other aviation system users can operate NextGen in a safe and efficient manner. To this end, FAA and NASA have pursued a wide range of efforts to incorporate human factors R&D into NextGen. However, these and future efforts will require a sustained focus not only across agencies but from the beginning to the end of the long process of developing a complex system like NextGen. Some suggest that FAA can meet this challenge by incorporating two elements into its human factors R&D efforts: a cross-agency plan developed in cooperation with NASA to identify, prioritize, and coordinate NextGen human factors issues, and strong and consistent leadership with the authority to not only prioritize human factors issues but ensure that they are taken into account throughout NextGen.
We recommend that the Secretary of Transportation direct the FAA Administrator to take the following two actions: (1) create a cross-agency human factors coordination plan in cooperation with NASA, as JPDO has previously recommended, that establishes an agreed-upon set of initial focus areas for research, inventories existing facilities for research, and capitalizes on past and current research of all NextGen issues; and (2) assign a high priority to filling the vacancy of human factors integration lead and structure that position and the program director of HFREG position in a manner that provides the authority to ensure that human factors research and development is coordinated, considered, and prioritized in all phases of NextGen development. We provided a draft of this report to the Department of Transportation and NASA for review and comment. NASA had no comments. DOT agreed to consider the recommendations and provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies of this report to the Secretary of Transportation, FAA, NASA, and interested congressional committees. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In response to your request, this report provides information on the status of the Department of Transportation’s Federal Aviation Administration’s (FAA) and National Aeronautics and Space Administration’s (NASA) efforts to incorporate human factors issues into the Next Generation Air Transportation System (NextGen). In particular, we sought to identify the extent to which (1) FAA’s and NASA’s human factors research and development (R&D) is coordinated, and (2) FAA’s and NASA’s human factors R&D supports NextGen. In determining the extent to which FAA’s and NASA’s human factors R&D is coordinated, we obtained and analyzed information provided by FAA and NASA officials on mechanisms in place to align human factors R&D efforts. We asked FAA and NASA officials to describe the mechanisms that are in place to coordinate their agencies’ human factors R&D. We assessed the information FAA and NASA officials provided us regarding their coordination mechanisms by comparing those efforts with recommendations issued by the Joint Planning and Development Office (JPDO)—an interagency organization responsible for planning NextGen. In 2008, JPDO issued a cross-agency gap analysis that found FAA and NASA lacked a cross-agency plan for identifying and addressing priority NextGen human factors issues. We also assessed FAA’s and NASA’s coordination efforts by summarizing the views of nine external aviation human factors experts who reviewed and assessed FAA’s and NASA’s coordination mechanisms. See our discussion below for more detail regarding the nine aviation human factors experts. We also obtained the views of several aviation industry officials, including officials from the Aerospace Industries Association, Air Transport Association, Air Line Pilots Association, MITRE Corporation, National Air Traffic Controllers Association, JPDO, Volpe National Transportation Systems Center, and the Boeing Corporation.
We also reviewed relevant reports issued by GAO, the Inspector General of the Department of Transportation, and the National Academy of Public Administration. In determining the extent to which FAA’s and NASA’s human factors R&D supports NextGen, we obtained relevant planning documents from FAA and NASA and had FAA and NASA officials provide us with detailed descriptions of their human factors R&D efforts. We provided this information and other related planning documents to nine aviation human factors experts and representatives from three aviation industry associations and asked them about their views on the extent to which FAA’s and NASA’s human factors research supports NextGen. The experts provided suggestions that FAA and NASA could adopt to better incorporate human factors issues in developing NextGen, and we reported the suggestions that a majority of experts recommended FAA and NASA adopt. In addition, we obtained the views of several aviation industry officials identified above. In assessing FAA and NASA human factors R&D coordination and human factors R&D supporting NextGen, we summarized the views of nine aviation human factors experts. We took several steps to identify potential aviation human factors experts. First, we identified experts in human factors R&D that GAO had consulted in the past. We then asked cognizant FAA and NASA officials responsible for and knowledgeable about aviation-related human factors R&D to recommend experts in aviation-related human factors R&D. In addition, we conducted comprehensive Internet searches for aviation human factors experts. Finally, we asked experts identified in the preceding steps to recommend other aviation human factors experts. Taking these steps enabled us to identify 25 potential experts. To make our final expert selection, we narrowed the list of 25 potential experts based on the following criteria: (1) knowledge of aviation-related human factors research, as determined by published research, such as human factors research related to aviation development, and (2) knowledge of NextGen planning and implementation needs, as determined by research, published work, and participation in NextGen seminars, conferences, and workshops. Applying these criteria to the 25 potential experts resulted in a final selection of 11 experts who have significant knowledge of both aviation-related human factors R&D in general and human factors R&D pertaining to NextGen in particular. We obtained and synthesized responses from 9 of the 11 aviation human factors experts. The experts we obtained responses from are listed in table 1. We interviewed an additional selected expert prior to finalizing our methodology and incorporated the expert’s views where appropriate in this report. In addition to the contact above, other key contributors to this report were Ed Laughlin, Assistant Director; Samer Abbas; Bert Japikse; Richard Hung; Michael Mgebroff; Tina Paek; and Amy Rosewarne.
To address challenges to the aviation industry's economic health and safety, the Federal Aviation Administration (FAA) is collaborating with the National Aeronautics and Space Administration (NASA) and other federal partners to plan and implement the Next Generation Air Transportation System (NextGen). NextGen will transform the current radar-based air traffic control system into a satellite-based system. Pilot and air traffic controller roles and responsibilities are expected to become more automated, thereby requiring an understanding of human factors, which studies how humans' abilities, characteristics, and limitations interact with the design of the equipment they use, environments in which they function, and jobs they perform. FAA and NASA are tasked with incorporating human factors issues into NextGen. As requested, this report discusses the extent to which FAA's and NASA's human factors research (1) is coordinated and (2) supports NextGen. To address these issues, GAO reviewed coordination mechanisms and planning documents and synthesized the views of nine aviation human factors experts. While FAA and NASA officials are coordinating their NextGen human factors research efforts in a variety of ways, they lack a cross-agency human factors plan for coordination. FAA and NASA have participated in research advisory committees and interagency research transition teams, signed interagency agreements, and held cross-agency meetings and conferences focused on human factors issues. FAA also created a human factors portfolio to identify and address priority human factors issues but not a cross-agency human factors coordination research plan in cooperation with NASA, as previously recommended by FAA's Joint Planning and Development Office (JPDO), an interagency organization responsible for planning NextGen. As a result, FAA has not established an agreed-upon set of initial focus areas for human factors research and development that identifies and capitalizes on past and current research, among other things. The experts GAO contacted generally agreed that FAA's and NASA's human factors research efforts adequately support NextGen, but made several suggestions, including enhancing human factors research leadership, for further incorporating human factors issues into NextGen systems. FAA and NASA have undertaken a variety of human factors efforts to support NextGen, including, among other things, creating planning documents detailing how human factors research will be incorporated into NextGen and dedicating financial resources specifically to NextGen human factors research. While the human factors experts GAO interviewed stated that these efforts support NextGen, a majority offered the following suggestions for further integrating human factors issues into NextGen: (1) Better ensure that human factors issues are fully integrated throughout the development of NextGen systems. FAA did not do this in the development of past systems, a fact that led to schedule slippages and cost increases. (2) Improve collaboration of human factors efforts within FAA departments. (3) Establish strong leadership. A 2008 National Academy of Public Administration report identified leadership as the single most important element of success for large-scale systems integration efforts like NextGen. FAA has not made it a priority to consistently staff the top two human factors positions.
Specifically, the position of the Chief Systems Engineer for Human Factors (now referred to as the human factors integration lead) has been vacant since January 2010. Moreover, FAA did not have a permanent program director of its Human Factors Research and Engineering Group from January 2009 until June 2010. These two positions currently lack the authority to ensure that human factors issues are addressed early and throughout the NextGen system development process to prevent the need to redesign these systems after implementation, which can cause delays and add costs. As a result, FAA may lack consistent leadership with sufficient authority to not only prioritize human factors issues but also ensure that they are addressed throughout NextGen. FAA should (1) create a coordination plan and (2) give priority to filling vacant leadership positions and provide the positions with authority for prioritizing human factors. FAA agreed to consider the recommendations.
In fiscal year 2005, the Social Security Administration (SSA) paid approximately $128 billion in cash benefits to about 12.8 million beneficiaries through the two largest federal programs available to persons with disabilities and their families: the Disability Insurance (DI) program and the Supplemental Security Income (SSI) program. Both programs serve those who are medically determined to be unable to engage in any substantial gainful activity due to a severe physical or mental impairment that is expected to last at least 12 months or result in death. Claimants must apply to SSA to receive disability benefits from these programs, and if awarded benefits, claimants may also have to requalify for support through what are known as continuing disability reviews. In most of the country currently, claimants who are denied initial or continuing benefits by SSA may appeal their denials administratively up to three times, each time for review by a different adjudicatory entity. These entities are (1) the state disability determination service that performs the initial review of disability claims and, in most states, a reconsideration determination, (2) an administrative law judge (ALJ) in SSA’s Office of Disability Adjudication and Review, and (3) a group of appellate reviewing officials within SSA known as the Appeals Council. The number of claims or appeals reviewed at each level in 2005 was as follows: over 2.6 million by state agencies, almost 520,000 by ALJs, and over 94,000 by the Appeals Council. Disability determinations at all of these levels are often complex and necessarily involve some degree of subjectivity by adjudicators, and the nature of these decisions has contributed to long-standing concerns about the extent to which adjudicators across the agency consistently interpret and implement SSA’s national disability policy. To help achieve more consistent application of policy between the state disability determination service level and the ALJ level, in 1996, SSA established the process unification rulings, a set of nine Social Security rulings for all SSA disability adjudicators to follow in matters involving difficult judgments, such as the weight to be given to opinions of claimants’ treating physicians versus medical opinions from other sources, and the evaluation of pain and other subjective symptoms. See appendix II for more details on process unification rulings. After claimants exhaust all administrative review options within SSA, they may then appeal their claims outside the agency to federal court. A claimant must first file an appeal with a federal district court within one of 12 federal judicial regions, known as judicial circuits. Figure 1 provides information on which states and territories are included in these circuits. In deciding the case, a district court judge or magistrate usually either affirms an agency decision, reverses the decision (essentially affirming the claimant’s case), or remands it to SSA for further review. According to SSA officials, remanded cases are generally reviewed by the ALJ who made the original decision. Judges can also dismiss a case if its scope is outside the court’s legal jurisdiction. Furthermore, if SSA prefers not to defend a case that has been filed, usually because of an error it has identified, the agency may request that the judge remand the case for the agency’s review.
Court remands have implications for SSA’s workload, the types of decisions SSA adjudicators make on remanded cases, and the time claimants must wait for decisions on their cases. Generally, when cases are remanded, ALJs must perform new hearings, which could involve new evidence presented at the time of court reviews. These remanded cases add to the already high workloads that ALJs have in reviewing denials by the agency’s disability determination service offices. The load may also affect ALJ decisions: In its September 2006 report, the Social Security Advisory Board found a small correlation between increased ALJ workload and increased allowances. Furthermore, although remanded cases are given priority in the line of cases that must be reviewed by ALJs, a substantial amount of time may pass before new decisions can be made at this administrative level, and the ALJ’s decision may undergo another review by the Appeals Council. In fiscal year 2006, it took SSA nearly a year on average to process court-remanded cases from the district courts. After a district court decision, both the claimant and SSA may appeal the case to a circuit court of appeals (also called an appellate court) and, beyond this, to the Supreme Court. However, few cases reach these appellate court levels and most disability cases are resolved in the district courts. According to SSA, no more than 20 district court cases have been appealed by the agency to the appellate courts each year since 2000. The Supreme Court has only reviewed four cases involving disability claims since 1991. See figure 2 for an overview of the disability appeals process. SSA is not obligated to follow a district court decision that conflicts with agency policies beyond that specific case. However, the agency is required to follow appellate court decisions for cases within that circuit, unless the agency seeks further judicial review. If the Supreme Court issues a decision, SSA is bound to follow the decision nationally. Several district, appellate, and Supreme Court decisions have affected disability policy in the past two decades. Appendix III outlines some cases that have resulted in such changes. SSA implemented its current policy of acquiescence in 1990 in response to the concerns of external stakeholders, including claimant representatives, that SSA had failed in the 1980s to offer timely and appropriate responses to appellate court decisions. Through an acquiescence ruling, SSA agrees to follow the appellate court’s holding on new cases, but only when they fall within the jurisdiction of that appellate court. SSA rescinds an acquiescence ruling if one of the following occurs: (1) the Supreme Court overrules or limits the relevant appellate court decision; (2) an appellate court overrules or limits itself on the relevant issue; (3) Congress enacts a law that obviates the acquiescence ruling; or (4) SSA clarifies, modifies, or revokes the regulation or ruling that was the subject of the pertinent appellate court decision. With new regulations issued in March 2006, SSA began implementing the Disability Service Improvement (DSI) process in August 2006 on a limited basis—i.e., in states in the Boston Region—and plans to gradually roll out the initiative to other regions. The regulations include changes to the appeals process within the agency that could potentially affect the number and types of cases that will go to federal courts in the future.
Among these changes is the gradual replacement of the Appeals Council with a Decision Review Board, designed to ensure the accuracy of SSA decisions and reduce remands from federal courts. The Board would review only select cases based on whether they are considered likely to have contained errors or involved new policies, rules, and procedures. Under the DSI process, therefore, claimants who are unhappy with ALJ decisions could no longer turn to the Appeals Council, but rather must appeal directly to the federal courts. In our June 2006 testimony, we reported that the public and stakeholders were concerned that replacing the Appeals Council with a Decision Review Board may increase the number of cases appealed to, and thus the workloads of, the federal courts. In response to these concerns, SSA officials maintained that DSI improvements will ultimately reduce the need for court appeals and also reduce remands. As part of its DSI initiative, the agency is making a systematic effort to collect and analyze data on court decisions in the course of training staff and keeping ALJs current. Such monitoring and data collection are consistent with the Office of Management and Budget’s and GAO’s internal control standards for all federal agencies. Between fiscal years 1995 and 2005, the number of disability appeals reviewed by the courts and decisions to remand these cases increased, and in the majority of remanded cases, claimants were subsequently granted benefits by SSA. In 2005, the year for which disaggregated data were available, GAO found the proportion of remands by district courts varied significantly by circuit. However, GAO did not find substantial variation by judicial circuit in SSA decisions on court-remanded cases. We found that federal district courts reviewed an increasing number of disability cases over the past decade, which corresponded with the increasing number of cases processed by SSA. Although the number of cases reviewed by federal district courts fluctuated over time, it generally increased by 20 percent from about 10,300 in fiscal year 1995 to about 12,400 by fiscal year 2005. (See fig. 3.) According to SSA officials, the increase in the number of claims reviewed by the courts may be a result of the increase in the number of claims that passed through the Appeals Council, SSA’s final decision-making body, over the same time period. Over the same period, remands were generally the most common district court decision, and their proportion increased by 36 percent from 1995 to 2005. Of those SSA cases decided by the district courts on the merits and not dismissed, 50 percent were remanded, 44 percent were affirmed, and 6 percent were reversed on average. (See fig. 4.) Notably, the proportion of remands reached its peak in 2001. Although a range of factors may affect the extent of court remands, some SSA officials suggested that the Appeals Council, having reviewed a record number of ALJ decisions in 2000, may have made mistakes in a greater share of cases that were subsequently appealed to, then remanded by, the district courts. The proportion of remands exceeded the proportion of affirmances in 1997 and continued to increase until 2001. Specifically, in 1995 only 36 percent of SSA decisions were remanded by the courts while 57 percent were upheld or affirmed. However, by 1998, the proportion of remands increased to 49 percent, while the proportion of affirmances declined to 46 percent.
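The roughly 20 percent growth figure can be checked directly against the approximate caseload counts reported for fiscal years 1995 and 2005; a quick arithmetic sketch using those rounded figures is:

\[
\frac{12{,}400 - 10{,}300}{10{,}300} \approx 0.20,
\]

or an increase of about 20 percent over the decade.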
When we showed SSA officials these trends, they generally attributed the shift to the process unification rulings, which the agency had established in 1996. According to SSA officials, the increased remands reflected district court efforts to assure that SSA adjudicators were following the agency’s new procedures. GAO found substantial variation in the proportion of cases remanded by judicial circuit in fiscal year 2005, the only year for which data by circuit were available. (See fig. 5.) Although remands and affirmances were the most frequently occurring types of decision in each circuit, the proportion of each varied considerably among the circuits. Specifically, the percent of remands ranged from a low of 35 percent to a high of 78 percent, while affirmances ranged from 22 percent to 61 percent. SSA officials were not in agreement about why there might be differences in the types of decisions across judicial circuits. According to some, differences might be due to judges in different circuits interpreting disability laws differently. Others told us that disparities in the number of claims appealed to district courts across circuits may contribute to these differences. (See app. IV, fig. 14 for more information on the number of cases reviewed by circuit for fiscal year 2005.) Currently, SSA does not have sufficient data that would allow it to determine why these decisions vary by circuit but plans to obtain this information as part of the DSI process implementation. Of the 57,000 cases remanded by the district courts between 1995 and 2005, SSA awarded benefits to the majority of claimants—about 66 percent—upon re-adjudication, with the remainder being denied (about 30 percent) or dismissed (5 percent). (See fig. 6.) Agency officials said the large percentage of awards in remanded cases was due, in part, to the fact that the lengthy period of the appeals process increased the likelihood that the nature or severity of claimants’ disabilities would change. The officials also attributed the awards to information in the court’s written judgments that made it possible for ALJs, in reviewing cases anew, to make more accurate decisions. The proportion of allowances in court-remanded cases after re-adjudication is just below the average allowance rate of 70 percent for all ALJ decisions. We did not find substantial variation in SSA decisions on court-remanded claims across judicial circuits. As shown in figure 7, the proportion of allowances for remanded cases ranged from 62 percent to 72 percent by circuit—relative to a national average of 66 percent. According to agency officials and stakeholders, a range of errors precipitated by heavy workloads is responsible for court remands of SSA’s disability determinations, but SSA data that would confirm or clarify reasons for remands are incomplete and not well managed. SSA has acknowledged the need to reduce remands and in 2006, along with other initiatives, introduced a new writing tool for ALJs in order to improve efficiency and better document decisions. However, agency data that would inform the problem and help address remands are incomplete and not well managed. Stakeholders commonly cited two reasons for remands: written explanations that did not support the decisions and inadequate documentation of consideration given to medical evidence. They expressed the view, however, that errors made with respect to documenting decisions were due, in large part, to heavy SSA adjudicator workloads.
Poor decision writing by ALJs and their staff was cited by all groups of stakeholders we interviewed, including SSA officials, district court judges, claimant representatives, and other stakeholders. Specifically, district court judges said they did not always believe that SSA’s decisions were wrong, but that the written explanations did not always support those decisions. Some claimant representatives said that poorly written decisions may be symptomatic of improper consideration of evidence and procedures by ALJs. With regard to the inadequate documentation of consideration given to medical evidence as a reason for remands, district court judges and claimant representatives we interviewed said ALJs either do not document how they weighed treating physicians’ opinions and assessed claimant statements about pain and other symptoms, or they do not consider them as required by the process unification rulings. ALJs we interviewed responded that addressing such evidence is sometimes very difficult and cited cases in which the treating physician appeared to be simply repeating claimants’ opinions about their inability to work, rather than offering substantive information about the conditions that would prevent work. Some district judges agreed that considering and incorporating medical evidence into a decision can be difficult, but stressed the importance of articulated and well-documented opinions in order for district court judges to make a decision other than to remand. Stakeholders we interviewed varied in their opinions regarding whether requirements of the process unification rulings were overly cumbersome and, therefore, resulted in remands. Members of the Appeals Council and the Social Security Advisory Board staff we spoke with believe that the process unification rulings provide important guidance, but have also made procedures for making decisions and decision-writing more cumbersome. On the other hand, representatives of the Association of Administrative Law Judges told us that they have not heard such complaints and, while acknowledging that decision-making involved more work, believe the rules did not make decision-writing overly cumbersome. At the same time, many of those we interviewed, including ALJs and district court judges, said the heavy ALJ workload was behind the apparent errors in documenting agency determinations that lead to remands. Some ALJs asserted that the frequency of court remands has not been unreasonable considering the number of cases that they must review. These ALJs also said their workload expectations of 50 to 60 hearings a month affected the time and attention they could give to each case. They asserted that they would need to write significantly fewer decisions in a month in order to assure that the work would withstand scrutiny by the federal courts. They noted that other ALJs who are able to write decisions that the courts uphold produce as few as five a month. Because the time needed to review cases and write decisions varied, however, representatives of the Association of Administrative Law Judges were unable to suggest an ideal number of cases that would be reasonable for ALJs to process. Specifically, these representatives said that decisions to deny benefits take substantially longer to document than those involving allowances. These representatives also stated that the number and quality of staff that ALJs have available to help process and write decisions vary. 
Finally, stakeholders also suggested that a variety of other factors contribute to remands, such as ALJs’ providing poor instructions to decision writers, SSA’s not providing adequate feedback to ALJs on reasons for remands, and federal courts’ having bias against ALJs’ decisions. Some stakeholders further stated that federal court bias may be rooted in concerns over how well decisions are generally written, expectations about how determinations should be made, and concerns with the amount of time and attention given to cases under the current workload. Acknowledging the need to address remands from the federal courts, SSA is taking steps to mitigate common documentation errors. One step has been to promote the use of a decision-writing tool known as the Findings Integrated Templates (FIT). This tool contains more than 1,600 templates for presenting analysis of evidence and ensuring that required statutes and regulations are followed. These templates are also designed to prevent common mistakes, such as failure to establish an appropriate date for the onset of disability benefits. SSA officials also said this tool is intended to help manage workloads by reducing the potential for miscommunication between ALJs and their staff and the time spent writing decisions. According to SSA officials, SSA plans to monitor the extent to which decisions written with this tool are remanded from the federal courts. Appeals Council judges we interviewed have reviewed some decisions written with FIT and have found them to be better articulated than decisions that did not rely on this tool. However, both Appeals Council judges and ALJ association representatives mentioned that the tool will not replace the need for additional, competent decision-writing staff. Additionally, SSA is pursuing a broader set of initiatives under its Disability Service Improvement (DSI) initiative that it hopes will result in more accurate decisions earlier in the process and, thereby, ultimately reduce workloads at the ALJ level. For example, as a part of DSI, SSA is implementing an expedited determination process for clear-cut cases, which it calls its Quick Disability Determinations. The agency also plans to add a level of reviewing attorneys, known as federal reviewing officials, who can affirm, reverse, or modify appealed agency decisions prior to their reaching ALJs. However, DSI is currently under way only in the Boston Region, and SSA has yet to evaluate the effectiveness of this initiative. While SSA collects data on reasons for remands, we found that the data are not well managed, incomplete, and therefore not reliable. Two separate SSA offices recently began collecting data on remanded cases to identify and track the reasons for remands in order to help train ALJs and their staff on how to reduce the number of remands. Nevertheless, while the two offices were collecting and using the data for the same purpose—training—they told us that they were not collaborating. When the two offices—the Office of Disability Adjudication and Review (ODAR) and the OGC—developed lists of categories to group reasons for remands, the offices did not consult with each other. As a result, the lists of categories used by these offices are not the same, and SSA officials told us that the offices may well classify similar remands differently. Moreover, some remand categories in the two data systems may be duplicative, resulting in an inefficient use of agency resources.
SSA officials acknowledged that better data reliability and collaboration between the two offices are needed and that, while the agency plans to develop a common vocabulary for remand reasons, it has yet to develop specific plans and timetables for addressing these issues. Through our conversations with SSA officials and reviews of reports, we also found that these data were not consistently entered into the agency’s databases. Within both systems, at least one reason should be entered per remanded case, but this did not always occur; instead, we found the extent to which this information was entered varied by database and SSA regional office. For the OGC reports, we found that the number of reasons recorded exceeded the number of cases, as would be expected; however, officials were not confident that the data on remand reasons were accurate or complete because the officials have not been able to assess the quality of the data. Within the ODAR reports for fiscal years 2005 and 2006, on the other hand, there were substantially fewer reasons reported than cases. Regional reports showed that SSA’s Seattle and New York offices have been collecting the most information on remands. Notably, the agency’s Boston office—which is the first to implement the structural changes of DSI—and the Philadelphia office have collected the least amount of information. SSA officials told us that they were aware that remand data were not entered into ODAR’s system consistently in early fiscal year 2005, and said they subsequently reiterated to staff the importance of collecting this information. SSA officials also mentioned that they are considering making remand reasons a mandatory field in the ODAR database to improve collection. SSA officials have a process in place for determining whether appellate court decisions conflict with the agency’s interpretation of disability statutes or regulations, and the agency has taken steps in recent years to align its policies nationally with appellate court decisions. In those cases where the agency acceded to certain appellate court rulings by issuing acquiescence rulings, we found that about half of the rulings were eventually replaced with national policy. Also, we found that the number of acquiescence rulings has declined in more recent years, a decline that SSA officials mainly attributed to the agency’s implementation of its process unification rulings of 1996, which officials believe created less room for differences of opinion between the courts and the agency regarding broader policies. Moreover, we found that the timeliness of acquiescence rulings had improved since 1998, when SSA established a timeliness goal of 120 days. When an appellate court decision is rendered, SSA officials review the decision to determine whether it conflicts with agency interpretation of law or regulations. The primary office responsible for this evaluation is the OGC, SSA’s office responsible for legal matters. For disability issues, OGC works in conjunction with the Office of Disability Programs, SSA’s office responsible for policy matters. These offices may consult with the Office of Disability Adjudication and Review, which rendered the agency’s final decision prior to its being appealed to federal court, as well as the Department of Justice (DOJ), the entity generally responsible for representing SSA in federal court.
If SSA determines that the appellate court decision conflicts with its policy, then it decides whether to appeal the case to the Supreme Court or to modify its policy to conform with that decision. According to officials, SSA rarely challenges appellate court decisions, and decisions to appeal are ultimately the prerogative of DOJ, because DOJ represents SSA in court. Some of the situations in which SSA would consider appealing to the Supreme Court are: a conflict between circuits; an issue of exceptional importance involving high visibility or significant funds; a statute or regulation held by the courts to be unconstitutional; or an important regulation held to be invalid. If SSA decides to follow the appellate court decision, it issues an acquiescence ruling that applies only within that circuit. However, because these rulings result in inconsistent policies throughout the country, the agency has added a clarification in the preamble to its 1998 regulations that acquiescence rulings are generally temporary policies that are not intended to remain in effect permanently. Therefore, after issuing an acquiescence ruling, SSA attempts to pursue a uniform national policy through various means, such as modifying regulations or rules, issuing new regulations or policy interpretations, seeking legislative changes, or re-litigating the issue within the same circuit. When SSA successfully incorporates the acquiescence ruling into national policy, it rescinds the acquiescence ruling. When SSA finds it necessary to issue an acquiescence ruling, it has procedures in place for informing adjudicators of these departures from national policy. According to officials, SSA communicates these and other rulings to SSA officials who make claims determinations, such as ALJs, through a variety of sources including: the Federal Register, SSA’s internal operations manual, the agency’s Web site, and e-mails. In some instances, officials learn about these rulings through training sessions. However, because most acquiescence rulings since the 1990s concerned narrow issues, SSA officials said the rulings have not warranted special training for adjudicators. SSA has taken steps to align its policies with the court decisions by issuing acquiescence rulings in a timely manner and following up with changes to its national policies. Since the implementation of its current acquiescence policy, SSA has issued 45 acquiescence rulings, the majority of which relate to determining whether a claimant is eligible for disability benefits. (Fig. 8 shows the number of rulings issued each year from 1990 to 2006, and app. V provides synopses of court holdings concerning disability determinations that led to acquiescence rulings.) Most of these rulings were issued between 1990 and 2000, when SSA published an average of four acquiescence rulings per year. In contrast, during the 6-year period from 2001 to 2006, the agency issued only five such rulings. SSA officials attributed the decline in acquiescence rulings to implementation of its process unification rulings, which they believe created less room for differences of opinion between the courts and the agency regarding broader policies. Specifically, officials commented that the process unification rulings clarified SSA policy as well as filled gaps in policy that were previously open for the courts to fill, and noted that, while the courts are not bound by these and other Social Security Rulings, the courts have frequently deferred to SSA’s rulings. 
As a result, SSA has seen a decline in the number of significant court cases involving disability law over time. (See app. III for a listing of key court cases.) We found that the number of acquiescence rulings issued by SSA varied by circuit during our study period (1990 to 2006), ranging from one in the First Circuit to eight in the Ninth Circuit. (See fig. 9.) SSA officials pointed out that the number of acquiescence rulings the agency issues in a given circuit is a function of the number and types of decisions issued by the appellate court within that circuit. For example, officials said that the Ninth Circuit has the largest disability caseload, and therefore, one would expect it to have the highest number of acquiescence rulings. Also, because the Ninth Circuit’s decisions largely concerned technical issues, SSA officials said they were less amenable to Supreme Court review. These officials added that the Ninth as well as the Eighth Circuits have had precedent-setting decisions. Since SSA established a regulation in 1998 that included a timeliness goal for issuing acquiescence rulings, the promptness of issuances has improved. (Fig. 10 depicts the timeliness of acquiescence rulings issued from 1990 to 2006.) Prior to establishing the regulation, SSA took more than a year to issue over 80 percent of the rulings. Since then, 54 percent of acquiescence rulings were issued within the guideline of 120 days (or 4 months). For those rulings that were not issued within 120 days, in most instances the timeliness goal did not apply because SSA either sought further judicial review or needed to coordinate with DOJ or other federal agencies. Once SSA has issued acquiescence rulings, the agency has frequently succeeded in replacing them with uniform national policies. We found that since 1990, nearly half of all acquiescence rulings (21 of 45) were rescinded and replaced by more permanent guidance. Further, most of these rescissions resulted from the agency’s issuing or modifying rulings or regulations. (Fig. 11 shows how acquiescence rulings were rescinded.) According to officials, acquiescence rulings are most commonly rescinded when the agency revises, publishes, or revokes rules and regulations—actions that are fully within the agency’s control. Six other rescissions occurred through other means: three from Supreme Court rulings upholding SSA’s policies and three from changes in law made by Congress. However, according to SSA, some issues brought about by federal court decisions, such as those involving the Constitution or federal law, have led to acquiescence rulings that have not been rescinded by the agency. For example, acquiescence ruling 91-1(5), which involves a claimant’s right to cross-examine an examining physician, remains in effect because SSA officials believe the only option for rescinding the ruling would require re-litigating the case. However, according to SSA officials, the relevant circuit appellate court and the Supreme Court have declined to review this ruling. Other reasons that acquiescence rulings may remain in effect include a lack of practical implications of the acquiescence ruling for other circuits or the fact that an acquiescence ruling was only recently issued. Replacing an acquiescence ruling with nationwide policy typically takes a significant period of time—in one case, 16 years. On the whole, SSA has taken many steps to align its policies with court decisions and establish uniform national standards.
The fact that the agency made some substantial changes to its policies in the mid-1990s may account for the reduced incidence of acquiescence rulings in the past 5 years. On the other hand, the high proportion of remanded and awarded claims over the past decade has likely required SSA to spend additional time and resources on processing and may have impeded the timely award of benefits to eligible individuals. While the DSI improvement initiative is designed to ameliorate this problem, the lack of reliably collected and well-managed data on court remands is likely to inhibit that effort. Although SSA plans, through the implementation of DSI, to gradually address the heavy workload that has been cited by many as contributing to errors that lead to remands, the agency cannot pinpoint specific reasons for remands and take corrective action without more reliable data. To the degree that the agency does collect some data, the fact that collection is carried out by two different offices risks inconsistency and divergent interpretations. This lack of complete and consistent information ultimately undermines the agency's ability to serve people with disabilities and their families. To ensure the agency has accurate and well-managed information to use in identifying corrective actions for reducing remands, we recommended that the Commissioner of SSA implement the following two measures: (1) take steps to ensure the reliability of data on reasons for remands and (2) coordinate agency data collection on remands and ascertain how best to use this information to reduce the proportion of cases remanded by federal courts. SSA provided us with comments on a draft of this report, which we have reprinted in appendix VI. In its comments, SSA agreed with both of our recommendations for improving data on remands and outlined actions it plans to take to enhance data reliability and collection. Specifically, in an upcoming update to the Case Processing and Management System, SSA plans to make the reasons for remands a mandatory data input field. In addition, SSA plans to establish an intercomponent work group to address issues related to remand data and to analyze data on the use of the Findings Integrated Templates and court decisions. SSA also provided technical comments, which generally improved the accuracy of the report, and we have incorporated them as appropriate. Copies of this report are being sent to the Commissioner of SSA, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix VII. We designed our study to obtain information on (1) the trends of the past decade in the number of appeals reviewed by the district courts and their decisions; (2) the reasons for court remands and factors that may contribute to the incidence of those remands; and (3) SSA's process for responding to appellate court decisions that conflict with agency policy and the agency's response in recent years.
To obtain information on these issues, we collected relevant quantitative and qualitative data from SSA; interviewed SSA officials and stakeholders within and outside the agency, such as district court judges, claimant representatives, and experts; and reviewed agency policies and regulations that address appellate court rulings that conflict with SSA disability program policies. To determine the completeness and accuracy of the data we obtained, we took the steps described below and determined that these data, with the exception of data on reasons for remand, were sufficiently reliable for use in this report. We conducted this work between February 2006 and January 2007 in accordance with generally accepted government auditing standards. To address the first research objective, we obtained national data from SSA on the number and decisions of cases reviewed by federal district courts—the first level of federal court review—for fiscal years 1995 to 2005 and analyzed these data for trends over time. Our analysis excluded cases that were dismissed because dismissals are generally decided on technical and procedural grounds rather than on the merits of the claim. For fiscal year 2005, the only year for which complete data were available, we obtained information from SSA on court decisions by state. We then categorized and analyzed these data by circuit. Furthermore, we obtained and analyzed agency data on the decisions SSA made on disability cases after they were remanded (i.e., allowances or denials of claims) for fiscal years 1995 to 2005. We also categorized and analyzed these data by circuit using information on the claimant's state of residence. We interviewed SSA officials to gather information on potential reasons for any trends. In addition, we interviewed SSA officials and reviewed previously issued agency reports and data manuals to assess the reliability of these data. To address the second objective, we also obtained data on cited reasons for remands from two SSA databases, the Case Processing and Management System (CPMS) and the National Docketing/Management Information System (NDMIS), which are maintained by two separate SSA offices responsible for re-adjudicating remanded cases and litigating claims in court. We compared the data to determine how and what SSA reports within the agency on reasons for remands. After interviewing agency officials and reviewing reports, we determined that these data were not sufficiently reliable for providing detailed information on reasons for remands, although some information was used to illustrate what SSA currently collects. In addition, we interviewed SSA officials and other stakeholder groups, including federal court judges and claimant representatives from the Seventh and Ninth Circuits, as well as experts, on reasons for remands and factors that influenced them. We selected stakeholders from these two circuits because these jurisdictions had the lowest and highest numbers of SSA policy changes resulting from acquiescence rulings. Information from these interviews is not generalizable to all circuits or stakeholders. For the third objective, we interviewed SSA officials and obtained available documents on how SSA determines whether a court of appeals decision conflicts with its policies and which option to pursue to address conflicting decisions (e.g., appealing or issuing an acquiescence ruling, whereby the agency agrees to abide by the court's judgment in future cases, albeit only in that jurisdiction).
We also obtained data on the number of acquiescence and other rulings that SSA issued since establishing its policy of acquiescence in 1990. For acquiescence rulings, we further reviewed SSA’s timeliness in issuing acquiescence rulings as well as the number issued by circuit and how SSA replaced acquiescence rulings with nationwide policies. We were unable to independently determine the extent to which court decisions conflicted with SSA policy or whether SSA should have pursued one option over another. We also interviewed SSA officials and relevant stakeholders, including selected federal court judges and claimant representatives, to obtain information on how court decisions and their related agency rulings have affected SSA disability adjudication policy in recent years. SSR 96-1p: “Application by the Social Security Administration of Federal Circuit Court and District Court Decisions.” Policy interpretation stating that SSA decision-makers will be bound by SSA’s nationwide policy until an acquiescence ruling is issued and that SSA does not acquiesce to federal district courts within a circuit. SSR 96-2p: “Giving Controlling Weight to Treating Source Medical Opinions.” Policy guidance for applying the regulatory provision that requires the adoption of a treating source’s medical opinion on the nature and severity of an impairment when the opinion is not inconsistent with other substantial evidence in the claimant’s file and the opinion is supported by medically acceptable diagnostic techniques. SSR 96-3p: “Considering Allegations of Pain and Other Symptoms in Determining Whether a Medically Determinable Impairment is Severe.” Policy interpretation on the consideration of symptoms in determining whether an impairment is “severe” at step 2 of the sequential evaluation process. SSR 96-4p: “Symptoms, Medically Determinable Physical and Mental Impairments, and Exertional and Nonexertional Limitations.” Policy interpretation explaining, among other things, that symptoms are not medically determinable impairments; that limitations, not impairments, are categorized as “exertional” or “nonexertional”; and that symptoms may result in nonexertional or exertional limitations. SSR 96-5p: “Medical Source Opinions on Issues Reserved to the Commissioner.” Policy interpretation on evaluating medical source opinions on issues such as whether an individual’s impairment(s) meets or is equivalent in severity to the requirements of a listing in SSA’s Listing of Impairments; what an individual’s residual functional capacity is; whether an individual’s residual functional capacity prevents him from doing past relevant work; and how the vocational factors of age, education, and work experience apply. SSR 96-6p: “Consideration of Administrative Findings of Fact by State Agency Medical and Psychological Consultants and Other Program Physicians and Psychologists at the ALJ and Appeals Council Levels of Administrative Review; Medical Equivalence.” Policy interpretation regarding weight given to Disability Determination Services level medical and psychological consultant findings at the ALJ and Appeals Council levels. Explanation of requirements for ALJs and the Appeals Council to obtain the opinion of a physician or psychologist designated by the Commissioner in making a determination about equivalence to the listings. 
SSR 96-7p: "Evaluation of Symptoms in Disability Claims: Assessing the Credibility of an Individual's Statements." Policy interpretation on when the evaluation of symptoms, including pain, requires a finding about the credibility of an individual's statements about pain and symptoms, and the factors to be considered in assessing the credibility of such statements. SSR 96-8p: "Assessing Residual Functional Capacity in Initial Claims." Policy clarification of the term residual functional capacity and discussion of the elements considered in assessing residual functional capacity. SSR 96-9p: "Determining Capability to Do Other Work—Implications of a Residual Functional Capacity for Less Than a Full Range of Sedentary Work." Policy interpretation on the impact of a residual functional capacity assessment for less than a full range of sedentary work on an individual's ability to do other work. Heckler v. Campbell, 461 U.S. 458 (1983) The U.S. Supreme Court upheld SSA's use of its vocational grid regulations. Hyatt v. Heckler, 579 F.Supp. 985 (W.D.N.C. 1984) In a class action, the U.S. District Court for the Western District of North Carolina found SSA's policy on pain contrary to Fourth Circuit law. This ruling enjoined SSA from refusing to follow the law of the circuit. Lopez v. Heckler, 725 F.2d 1489 (9th Cir. 1984) The Ninth Circuit Court of Appeals enjoined SSA to uphold prior decisions requiring SSA to apply a medical improvement standard before terminating benefits. Stieberger v. Heckler, 615 F.Supp. 315 (S.D.N.Y. 1985) In a class action, the U.S. District Court for the Southern District of New York ruled that SSA had violated the rights of claimants by not following circuit court law on the weight to give treating physician evidence. After this decision, SSA introduced its policy of issuing acquiescence rulings when the agency is not willing to implement an appellate decision nationwide. Acquiescence rulings explain how SSA applies decisions of Courts of Appeals in the circuit in which the decision was rendered. Schisler v. Heckler, 787 F.2d 76 (2nd Cir. 1986) The Second Circuit Court of Appeals found that a treating physician's opinion on the subject of medical disability is binding unless contradicted by substantial evidence. Hyatt v. Heckler, 711 F.Supp. 837 (W.D.N.C. 1989) On remand, the U.S. District Court for the Western District of North Carolina found SSA's policies on pain did not conform to circuit law. The court ordered these policies to be cancelled and drafted a new ruling on pain for North Carolina adjudicators. Sullivan v. Zebley, 493 U.S. 521 (1990) The U.S. Supreme Court struck down SSA's regulations for determining whether a child is disabled because the regulations denied benefits to children whose impairments did not meet or equal the listing of impairments and did not allow the child to qualify for benefits based on an individualized functional assessment. Schisler v. Sullivan, 3 F.3d 563 (2nd Cir. 1993) The Second Circuit Court of Appeals upheld SSA's 1991 regulations on the opinions of treating physicians as a valid use of SSA's regulatory power. Hyatt class action settlement: SSA agreed to re-adjudicate 77,000 cases under the 1991 regulations on the evaluation of pain and other symptoms. Barnhart v. Walton, 535 U.S. 212 (2002) The U.S. Supreme Court upheld SSA's interpretation that the claimant's inability to work must last, or be expected to last, 12 months.
The court also upheld SSA's regulation precluding a finding of disability when the claimant returns to work within a 12-month period. Barnhart v. Thomas, 540 U.S. 20 (2003) The U.S. Supreme Court upheld SSA's denial of benefits to a claimant who was still able to do her previous work, without determining whether that type of work continued to be available in the national economy. Appeals Council denials of Social Security disability claims increased by about 36 percent, from about 48,300 in fiscal year 1994 to about 65,800 in fiscal year 2004. SSA decisions on disability claims following remands from federal district courts increased from about 3,000 in fiscal year 1995 to almost 7,500 in fiscal year 2005. The 12 judicial circuits with district courts that review Social Security disability claims varied in the number of claims they reviewed in fiscal year 2005. For example, the District of Columbia District Court reviewed fewer than 100 claims, while the district courts in the Ninth Circuit reviewed almost 3,000. The court held that Social Security regulations allow the use of a vocational expert only at step 5 of the sequential evaluation process and that, therefore, reliance on a vocational expert is improper in making the step 4 determination as to whether a claimant can return to past relevant work. The court held that SSA can re-open an otherwise final administrative determination at any time when a claimant, who had no individual legally responsible for prosecuting the claim at the time of the prior determination, established a prima facie case that mental incompetence prevented him from understanding the procedure to request administrative review, unless SSA holds a hearing and determines that mental incompetence did not prevent the claimant from filing a timely appeal. The court held that entitlement to a subpoena of an examining physician for purposes of cross-examination is automatic and must be granted. The court held that in deciding the appeal of a determination that an individual's disability has medically ceased, the adjudicator must consider the issue of the individual's disability through the date of the Secretary of Health and Human Services' final decision, rather than only through the date of the initial cessation determination. The court held that an Appeals Council dismissal of a request for review of an ALJ decision for reasons of untimeliness is a "final decision" and subject to judicial review. The court held that a person's return to substantial gainful activity within 12 months of the onset date of his or her disability, and prior to an award of benefits, does not preclude an award of benefits and entitlement to a trial work period. The court held that an initial determination in the Social Security or SSI programs must be reopened when the notice of the initial determination did not explicitly state that the failure to seek reconsideration results in a final determination, and the claimant did not pursue a timely appeal. The court held that a claimant for disability or SSI benefits who has an IQ score in the range covered by listing 12.05C and who cannot perform his or her past relevant work because of a physical or other mental impairment has per se established the additional and significant work-related limitation of function requirement.
The court held that, in making a determination following an individual’s re-entitlement period that an individual with a disabling impairment has engaged in substantial gainful activity, the Secretary of Health and Human Services may not consider work and earnings by the individual in a single month rather than an average of work and earnings over a period of months. The court held that, in making a disability determination on a subsequent disability claim with respect to an un-adjudicated period, an adjudicator must adopt a finding regarding a claimant’s residual functional capacity, made in a final decision on a prior disability claim arising under the same title of the Social Security Act unless there is new and material evidence. The court held that, in order to find that the skills of a claimant who is close to retirement age are “highly marketable” within the meaning of the Secretary of Health and Human Services’ regulations, SSA must first establish that the claimant’s skills are sufficiently specialized and coveted by employers as to make the claimant’s age irrelevant in the hiring process and enable the claimant to obtain employment with little difficulty. The court held that a claimant for Disability Insurance or SSI benefits based on disability who has an amputation of a lower extremity and cannot afford the cost of a prosthesis has an impairment that meets the listings. The court held that, in making a disability determination on a subsequent disability claim with respect to an un-adjudicated period, where the claim arises under the same title of the Social Security Act as a prior claim on which there has been a final decision by an ALJ or the Appeals Council that the claimant is not disabled, SSA must: (1) apply a presumption of continuing nondisability and, if the presumption is not rebutted by the claimant, determine that the claimant is not disabled; and (2) if the presumption is rebutted, adopt certain findings required under the applicable sequential evaluation process for determining disability, made in the final decision by the ALJ or the Appeals Council on the prior disability claim. The court held that a person’s return to substantial gainful activity within 12 months of the onset date of his or her disability, and prior to an award of benefits, does not preclude an award of benefits and entitlement to a trial work period. The court held that a claimant for Disability Insurance benefits or SSI benefits based on disability who has mental retardation or autism with a valid IQ score in the range covered by Listing 12.05C and who cannot perform his or her past relevant work because of a physical or other mental impairment has per se established the additional and significant work-related limitation of function requirement of the regulations. The court held that, in making a disability determination or decision on a subsequent disability claim with respect to an un-adjudicated period, where the claim arises under the same title of the Social Security Act as a prior claim on which there has been a final decision by an ALJ or the Appeals Council, SSA must adopt the finding of the demands of a claimant’s past relevant work made in the prior decision unless new and material evidence or changed circumstances provide a basis for a different finding. 
The court held that in making a disability determination or decision on a subsequent disability claim with respect to an un-adjudicated period, where the claim arises under the same title of the Social Security Act as a prior claim on which there has been a final decision by an ALJ or the Appeals Council, SSA must adopt the finding of a claimant's residual functional capacity made in the final decision by the ALJ or the Appeals Council on the prior disability claim unless new or additional evidence or changed circumstances provide a basis for a different finding. The court held that SSA is required to find that a claimant close to retirement age and limited to sedentary or light work has "highly marketable" skills before determining that the claimant has transferable skills and, therefore, is not disabled. The court held that SSA is required to find that a claimant close to retirement age and limited to sedentary or light work has "highly marketable" skills before determining that the claimant has transferable skills and, therefore, is not disabled. The court held that an Appeals Council dismissal of a request for review of an ALJ decision for reasons of untimeliness is a "final decision" and subject to judicial review. The court held that, in making a disability determination on a subsequent disability claim with respect to an un-adjudicated period, SSA must consider a finding of a claimant's residual functional capacity made in a final decision by an ALJ or the Appeals Council on the prior disability claim as evidence and give it appropriate weight in light of all relevant facts and circumstances but that SSA does not have to adopt the finding. The court held that a determination of medical equivalence under the regulations must be based solely on evidence from medical sources. The court held that an ALJ, when receiving evidence from a vocational expert, must ask the expert how the testimony or information corresponds to information provided in the Dictionary of Occupational Titles and must ask the expert to explain the difference if the testimony or evidence differs from the Dictionary. The court held that SSA has the burden of proving at step 5 of the sequential evaluation process that the claimant has the residual functional capacity to perform other work which exists in the national economy. The court held that a claimant's return to substantial gainful activity within 12 months of the alleged onset date of his or her disability, and prior to an award of benefits, does not preclude an award of benefits and entitlement to a trial work period. The court held that SSA may not apply the Medical-Vocational Guidelines (grid rules) as a framework to deny disability benefits at step 5 of the sequential evaluation process when a claimant has a nonexertional limitation without either: (1) taking or producing vocational evidence; or (2) providing notice of the agency's intention to take official notice of the fact that the particular nonexertional limitation does not significantly erode the occupational job base. The court held that for cases concerning Listings 12.05 or 112.05 decided by ALJs or the Appeals Council before September 20, 2000, which have been remanded by the courts to SSA, the ALJ should apply the pre-September 20, 2000 version of the Listing as interpreted by the Seventh Circuit.
The court held that for certain applicants under age 18, ALJs and Administrative Appeals Judges must make reasonable efforts to ensure that a qualified pediatrician or other specialist evaluates the case. Robert E. Robertson (Director), Michele Grgich (Assistant Director), Danielle Giese (Analyst-in-Charge), Susan Bernstein, Candace Carpenter, Joy Gambino, Suneeti Shah, Albert Sim, Ellen Soltow, and Rick Wilson made significant contributions to this report. Luann Moy, Vanessa Taylor, and Walter Vance provided assistance with research methodology and data analysis. Daniel Schwimer provided legal counsel. Social Security Administration: Agency Is Positioning Itself to Implement Its New Disability Determination Process, but Key Facets Are Still in Development. GAO-06-779T. Washington, D.C.: June 15, 2006. Social Security Administration: Administrative Review Process for Adjudicating Initial Disability Claims. GAO-06-640R. Washington, D.C.: May 16, 2006. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. SSA's Disability Programs: Improvements Could Increase the Usefulness of Electronic Data for Program Oversight. GAO-05-100R. Washington, D.C.: December 10, 2004. Social Security Administration: More Effort Needed to Assess Consistency of Disability Decisions. GAO-04-656. Washington, D.C.: July 2, 2004. Social Security Administration: Strategic Workforce Planning Needed to Address Human Capital Challenges Facing the Disability Determination Services. GAO-04-121. Washington, D.C.: January 27, 2004. Social Security Disability: Disappointing Results from SSA's Efforts to Improve the Disability Claims Process Warrant Immediate Attention. GAO-02-322. Washington, D.C.: February 27, 2002. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
The Social Security Administration's (SSA) Disability Insurance and Supplemental Security Income programs provided about $128 billion to about 12.8 million persons with disabilities and their families in fiscal year 2005. Claimants who are denied benefits by SSA may appeal to federal courts. Through current initiatives, SSA is attempting to reduce the number of cases appealed to courts and remanded to SSA for further review. In addition, there have been long-standing concerns about how SSA responds to court decisions that conflict with its policies. GAO was asked to examine: (1) trends over the past decade in the number of appeals reviewed by the courts and their decisions, (2) reasons for court remands and factors contributing to them, and (3) SSA's process for responding to court decisions that conflict with agency policy. GAO reviewed SSA data and documents on court decisions, remands, and SSA's processes and interviewed agency officials and stakeholders on data trends, reasons for remands, and SSA processes. Between fiscal years 1995 and 2005, the number of disability appeals reviewed by the federal district courts increased, along with the proportion of decisions that were remanded. More disability claims were remanded than affirmed, reversed, or dismissed over the period, and the proportion of total decisions that were remands ranged from 36 percent to 62 percent, with an average of 50 percent. Remanded cases often require SSA to re-adjudicate the claim, and—along with the passage of time and new medical evidence—the majority of remanded cases result in allowances. According to SSA officials and outside observers, a range of errors prompted by heavy workloads is responsible for court remands of SSA's disability determinations, but data that would confirm or clarify the issue are incomplete and not well-managed. SSA has only recently begun collecting data on remands, and GAO found these data to be incomplete. Additionally, this information is collected by two different offices that have created somewhat different categories for the data, making some of the information inconsistent and possibly redundant. Meanwhile, SSA has acknowledged the need to reduce remands and, in 2006, along with other initiatives, introduced new decision-writing templates to improve efficiency and reduce errors. SSA has a process in place for determining whether appellate court decisions conflict with the agency's interpretation of disability statutes or regulations and has taken steps in recent years to align its national policies with appellate court decisions. For example, officials and stakeholders attributed a downward trend in appellate court decisions that conflict with agency policy to significant policy changes instituted by SSA in the mid-1990s. In addition, for those cases where the agency acceded to conflicting appellate court decisions by issuing acquiescence rulings within the related circuits, GAO found that about half of the rulings issued were eventually replaced with national policy. Moreover, GAO found that the timeliness of acquiescence rulings had improved since 1998, when SSA established a timeliness goal of 120 days.
In the last several decades, Congress has passed various laws to increase federal agencies' abilities to identify and address the health and environmental risks associated with toxic chemicals. Some of these laws, such as the Clean Air Act; the Clean Water Act; the Federal Food, Drug, and Cosmetic Act; and the Federal Insecticide, Fungicide, and Rodenticide Act, authorize the control of hazardous chemicals in, among other things, the air, water, and soil and in food, drugs, and pesticides. Other laws, such as the Occupational Safety and Health Act and the Consumer Product Safety Act, can be used to protect workers and consumers from unsafe exposures to chemicals in the workplace and the home. Nonetheless, the Congress found that human beings and the environment were being exposed to a large number of chemicals and that some could pose an unreasonable risk of injury to health or the environment. In 1976, the Congress passed TSCA to provide EPA with the authority to obtain information on chemicals and regulate those substances that pose an unreasonable risk to human health or the environment. While other environmental and occupational health laws generally control only the release of chemicals in the environment, exposures in the workplace, or the disposal of chemicals, TSCA allows EPA to control the entire life cycle of chemicals, from their production and distribution to their use and disposal. In October 2003, the European Commission presented a proposal for a new EU regulatory system for chemicals. REACH was proposed because the Commission believed that the existing legislative framework for chemicals in the EU did not produce sufficient information about the effects of chemicals on human health and the environment. In addition, the risk assessment process was slow and resource-intensive and did not allow the regulatory system to work efficiently and effectively. Under REACH, authority exists to establish restrictions for any chemical that poses unacceptable risks and to require authorization for the use of chemicals identified as being of very high concern. These restrictions could include banning uses in certain products, banning uses by consumers, or even completely banning the chemical. Authorization will be granted if a given manufacturer can demonstrate that the risks from a given use of the chemical can be adequately controlled, provided a threshold can be determined for the chemical. If no threshold can be determined, the manufacturer has to demonstrate that the socioeconomic benefits outweigh the risks associated with continued use and that there are no suitable alternatives or technologies available. In addition, a key aspect of REACH is that it places the burden on manufacturers, importers, and downstream users to ensure that the substances they manufacture, place on the market, or use do not adversely affect human health or the environment. Its provisions are underpinned by the precautionary principle. REACH was approved in December 2006 and went into effect in June 2007. To avoid overloading regulators and companies with the work arising from the registration process, full implementation of all the provisions of REACH will be phased in over an 11-year period (or by 2018). TSCA does not require companies to develop information for either new or existing chemicals, whereas REACH generally requires companies to submit such information, and in some circumstances to develop it, for both kinds of chemicals.
For new chemicals, TSCA requires companies to submit to EPA any available human health and environmental data, but companies do not have to develop additional information unless EPA requires additional test data through a test rule or other EPA action. For existing chemicals, companies do not have to develop such information unless EPA requires them to do so. In contrast, under REACH companies generally are required to provide the European Chemicals Agency with health and environmental data and, where needed, to develop such data. The extent of such data depends on the annual production volume of the chemical. TSCA does not require chemical companies to test new chemicals for their effect on human health or the environment, but it requires companies to submit such information if it already exists when they submit a premanufacture notice (PMN) notifying EPA of their intent to manufacture a new chemical. This notice provides, among other things, certain information on the chemical's intended uses and potential exposure. TSCA also requires chemical companies to submit data and other information on the physical/chemical properties, fate, or health and environmental effects of a chemical, which we refer to in this report as "hazard information," that the companies possess or that is reasonably ascertainable by them when they submit a PMN to EPA. In part because TSCA does not require chemical companies to develop hazard information before submitting a PMN, EPA employs several other approaches for assessing hazards, including using models that compare new chemicals with existing chemicals with similar molecular structures for which test data on health and environmental effects are available. In June 2005, we recommended that EPA develop a strategy for improving and validating the models that EPA uses to assess and predict the hazards of chemicals. EPA is currently devising such a strategy, according to agency officials. EPA receives approximately 1,500 new chemical notices each year, half of which are exemption requests, and has reviewed more than 45,000 such notices from 1979 through 2005. PMNs include information such as the specific chemical identity; the estimated maximum production volume for 12 months of production; a description of how the chemical will be processed and used; and estimates of how many workers may be exposed to the chemical. Additionally, EPA requires that the following information be submitted with a PMN: all existing health and environmental data in the possession of the submitter, parent company, or affiliates, and a description of any existing data known to or reasonably ascertainable by the submitter. EPA estimates that most PMNs do not include test data of any type, and only about 15 percent include health and safety data—such as acute toxicity or skin and eye irritation data. In some cases, EPA may determine during the review process that more data are needed for an analysis of a chemical's potential risks and often will negotiate an agreement with the chemical company to conduct health hazard or environmental effects testing. According to EPA, more than 300 testing agreements have been negotiated since EPA began reviewing new chemicals in 1979. In some cases, however, the chemical company may voluntarily withdraw the PMN rather than incur the costs of hazard testing requested by EPA, or for other reasons. EPA does not maintain records as to how many PMNs chemical companies have withdrawn because of potential EPA action.
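To make the PMN contents described above concrete, the following sketch models a notice as a simple record and checks whether any existing health and safety studies were attached. It is illustrative only: the field names are hypothetical simplifications, not EPA's actual form fields, and the check mirrors the point that a PMN can be complete under TSCA even when it contains no test data.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified representation of a premanufacture notice (PMN);
# field names are illustrative, not EPA's actual form fields.
@dataclass
class PremanufactureNotice:
    chemical_identity: str
    max_12_month_production_lbs: float
    processing_and_use_description: str
    estimated_workers_exposed: int
    existing_health_safety_studies: List[str] = field(default_factory=list)

def includes_test_data(pmn: PremanufactureNotice) -> bool:
    """TSCA requires submission of existing studies, not new testing,
    so a notice with no attached studies can still be complete."""
    return len(pmn.existing_health_safety_studies) > 0

notice = PremanufactureNotice(
    chemical_identity="hypothetical substance X",
    max_12_month_production_lbs=50_000,
    processing_and_use_description="intermediate used in coatings manufacture",
    estimated_workers_exposed=25,
)
print("Includes existing health and safety data:", includes_test_data(notice))
```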
While TSCA does not require chemical companies to develop information on the harmful effects of existing chemicals on human health or the environment, TSCA provides that EPA, by issuing a test rule, can require such information on a case-by-case basis. Before promulgating such a rule, EPA must find, among other things, that current data are insufficient, testing is necessary, and that either (1) the chemical may present an unreasonable risk or (2) the chemical is or will be produced in substantial quantities and that there is or may be substantial human or environmental exposure to the chemical. EPA officials responsible for administering the act said that TSCA's test rule provision and data-gathering authorities can be burdensome and too time-consuming for EPA to administer. Because EPA has limited information on existing chemicals and faces difficulty in promulgating test rules, the agency uses voluntary programs to help gather more data to assess the risks of certain chemicals. While TSCA authorizes EPA to require testing of existing chemicals, the act does not authorize the agency to do so unless EPA first determines on the basis of risk or exposure information that the chemicals warrant such testing. TSCA provides EPA the authority to obtain hazard information needed to assess chemicals by issuing rules under Section 4 of TSCA that require chemical companies to test chemicals to determine their health and environmental effects and to submit the test data to EPA. However, in order for EPA to issue a test rule, the agency must determine that a chemical (1) may present an unreasonable risk of injury to health or the environment or (2) is or will be produced in substantial quantities and (a) there is or may be significant or substantial human exposure to the chemical or (b) it enters or may reasonably be anticipated to enter the environment in substantial quantities. EPA must also determine that there are insufficient data to reasonably determine or predict the effects of the chemical on health or the environment and that testing is necessary to develop such data. Once EPA has made the required determination, the agency can issue a proposed rule for public comment, consider the comments it receives, and promulgate a final rule ordering chemical testing. OPPT officials responsible for implementing TSCA told us that finalizing rules under Section 4 of TSCA can take from 2 to 10 years and require the expenditure of substantial resources. EPA has used its authority to require testing for about 200 existing chemicals since the agency began reviewing chemicals under TSCA in 1979. EPA does not maintain estimates of the cost of implementing these rules. However, in our September 1994 report on TSCA, we noted that EPA officials told us that issuing a rule under Section 4 can cost up to $234,000. Given the difficulties and cost of requiring testing, EPA could review substantially more chemicals in less time if it had authority to require chemical companies to conduct testing and provide test data on chemicals once they reach a substantial production volume. In June 2005, we stated that Congress may wish to consider amending TSCA to provide EPA such authority. As an alternative to formal rule making, EPA asserts that Section 4 of TSCA provides EPA with implied authority to enter into "enforceable consent agreements" with chemical companies that would require them to conduct testing when there are insufficient data available to assess a chemical's risk.
EPA uses enforceable consent agreements to accomplish testing where a consensus exists among EPA, affected manufacturers and/or processors, and interested members of the public concerning the need for and scope of testing. According to EPA, these agreements allow greater flexibility in the design of the testing program, and negotiating these agreements is generally less costly and time-consuming than promulgating test rules. EPA has entered into consent agreements with chemical companies to develop test data for about 60 chemicals for which the agency determined additional data were needed to assess the chemical's risk. Under Section 8 of TSCA, EPA promulgates rules directing chemical companies to maintain records and submit such information as the EPA Administrator reasonably requires. This information can include, among other things, chemical identity, categories of use, production levels, by-products, existing data on adverse health and environmental effects, and the number of workers exposed to the chemical. Section 8(d) authorizes EPA to promulgate rules under which chemical companies are required to submit lists or copies of any health and safety studies to EPA. Finally, Section 8 requires chemical companies to report any information to EPA that reasonably supports a conclusion that a chemical presents a substantial risk of injury to health or the environment. According to EPA, the agency has issued about 50 Section 8(d) rules covering approximately 1,000 chemicals. As a result of these rules, EPA has received nearly 50,000 studies covering environmental fate, human health effects, and environmental effects. However, TSCA Section 8(d) only applies to existing studies and does not require companies to develop new studies. The TSCA Inventory Update Rule (IUR) currently requires chemical companies to report to EPA every 5 years the site and manufacturing information for chemicals in the TSCA inventory that they manufacture or import in amounts of 25,000 pounds or greater at a single site. For the most current reporting cycle and for subsequent reporting cycles, chemical companies must report additional information—such as uses; the types of consumer products in which the chemical will be used, including those intended for use by children; and the number of workers who could potentially be exposed—for chemicals manufactured or imported in amounts of 300,000 pounds or more at a single site. In response to the lack of information on existing chemicals and the relative difficulty the agency faces in requiring companies to conduct additional testing under TSCA, EPA has taken steps to increase the amount of information it can access on chemicals by implementing a voluntary program called the High Production Volume (HPV) Challenge Program. The HPV Challenge Program focuses on recruiting chemical company sponsors to voluntarily provide data on approximately 2,800 chemicals that chemical companies reported in 1990 were domestically produced or imported at a high volume—over 1 million pounds. Through this program, sponsors develop a basic set of screening level information on the chemicals either by gathering available data, using models to predict the chemicals' properties, or conducting testing of the chemicals. The six data endpoints collected under the HPV Challenge Program are acute toxicity, repeat dose toxicity, developmental and reproductive toxicity, mutagenicity, ecotoxicity, and environmental fate.
EPA believes that these basic data are needed to make an informed, preliminary judgment about the hazards of HPV chemicals. In June 2005, we recommended that EPA develop a methodology for using information collected through the HPV Challenge Program to prioritize chemicals for further review. EPA's Director of OPPT told us the agency developed such a methodology as data from chemical companies became available and is currently applying the methodology to assess HPV chemicals. The methodology was developed based on input received from an advisory committee, the National Pollution Prevention and Toxics Advisory Committee (NPPTAC). Despite these promising voluntary efforts regarding high-production-volume chemicals, several difficulties remain, as we have noted in our prior work. For example, (1) chemical companies have not agreed to test approximately 300 chemicals identified by EPA as high-production-volume chemicals; (2) additional chemicals will become high-production-volume chemicals in the constantly changing commercial chemical marketplace; and (3) chemicals without a particularly high production volume may also warrant testing, based on their toxicity and the nature of exposure to them. In addition, this program may not provide enough information for EPA to use in making risk-assessment decisions. While the data in the HPV Challenge Program and the new exposure and use reporting under the IUR may help EPA prioritize chemicals of concern, the data may not provide sufficient evidence for EPA to determine whether a reasonable basis exists to conclude that the chemical presents an unreasonable risk of injury to health or the environment and that regulatory action is necessary. Although the chemical industry may be willing to take action, even before EPA has the evidence required for rule making under TSCA, the industry is nonetheless large and diverse, and it is uncertain whether all companies will always take action voluntarily. To ensure that adequate data are made publicly available to assess the special impact that industrial chemicals may have on children, EPA launched the Voluntary Children's Chemical Evaluation Program (VCCEP). In December 2000, EPA implemented VCCEP first as a pilot program. EPA's goal is to learn from this pilot program before a final VCCEP process is determined and before additional chemicals are selected. For the VCCEP pilot, EPA identified 23 commercial chemicals to which children have a high likelihood of exposure and the information needed to assess the risks to children from these chemicals. Recently, EPA requested comments on the implementation of the pilot program from stakeholders and other interested parties but has not yet responded to the comments or evaluated the program for its effectiveness. EPA is running the VCCEP pilot to gain insight into how best to design and implement the program so that it effectively provides the agency and the public with the means to understand the potential health risks to children associated with exposure to these and, ultimately, other chemicals. EPA intends the pilot to be the means of identifying efficiencies that can be applied to any subsequent implementation of the VCCEP. Another purpose of the pilot is to test the performance of the peer consultation process.
For the VCCEP pilot, the purpose of the peer consultation process is to provide a forum for scientists and relevant experts from various stakeholder groups to exchange scientific views on the chemical sponsor's data submissions and, in particular, on the recommended data needs. Under the VCCEP pilot, EPA is pursuing a three-tiered approach for gathering information, with tier 3 involving more detailed toxicology and exposure studies than tier 2, and tier 2 involving more detailed toxicology and exposure studies than tier 1. EPA asked companies that produce and/or import 23 specific chemicals to volunteer to sponsor their chemical in the first tier of the VCCEP pilot. EPA selected these 23 chemicals because the agency believed them to be especially relevant to children's chemical exposures, based on factors such as the presence of the chemical in human tissue or blood, in food and water children eat and drink, and in air children breathe. In addition, many of these chemicals were known to be relatively "data rich" in that chemical data were already available. Chemical companies have volunteered to sponsor 20 of the 23 chemicals in the VCCEP. EPA believes that these 20 chemicals provide an adequate basis for evaluating the VCCEP pilot. Chemical companies volunteering to sponsor a chemical under the program have agreed to make chemical-specific public commitments to make certain hazard, exposure, and risk assessment data and analyses publicly available. For toxicity data, specific types of studies have been assigned to each of the three tiers. For exposure data, the depth of exposure information increases with each tier. If data needs are identified through the peer consultation process, the sponsor will choose whether to volunteer for any additional data generation or testing and whether to provide additional assessments in subsequent tiers. However, company sponsors are under no obligation to volunteer for tiers 2 and 3, even if EPA determines additional information is needed. After the submission of tier 1 information and its review by the peer consultation group—consisting of scientific experts with extensive and broad experience in toxicity testing and exposure evaluations—EPA reviews the sponsor's assessment and develops a response, focusing primarily on whether any additional information is needed to adequately evaluate the potential risks to children. If additional information is needed, EPA will indicate what information should be provided in tier 2. Companies will then be given an opportunity to sponsor chemicals at tier 2. EPA plans to repeat this process to determine whether tier 3 information is needed. Information from all three tiers may not always be necessary to adequately evaluate the risk to children. According to EPA officials, since the program's inception, sponsors have submitted 15 of the 20 assessments on chemicals to EPA and the peer consultation group. The peer consultation group has issued reports on 13 of the 15 chemical submissions. EPA has issued Data Needs Decisions on 11 of these 13 chemicals and determined that 5 of them needed additional data. One of the sponsors agreed to commit to tier 2 and to provide the additional data to EPA. The sponsor of 2 other chemicals declined to commit to tier 2 since it had ceased manufacturing the chemicals in 2004. The sponsor of the other 2 chemicals told EPA it will decide whether to commit to the additional testing by the end of July 2007.
In November 2006, EPA requested comments on the implementation of the pilot program from stakeholders and interested parties. As part of its request for comments, EPA included a list of questions that the agency believed would be helpful in its evaluation of the pilot program. The questions ranged from the sufficiency of the hazard, exposure, and risk assessments provided by the chemical sponsors; to the effectiveness and efficiency of the peer review panel; to the timeliness of the VCCEP pilot in providing data. EPA received comments from 11 interested parties, including industry representatives, environmental organizations, children's health advocacy groups, and others. Generally, the industry groups provided positive comments about the pilot while the children's health advocacy and environmental groups provided negative comments about VCCEP. For example, the American Chemistry Council commented that the pilot is proceeding well, that the current tiered approach is sound, and that only minimal improvements are needed. One of the improvements the chemistry council suggested is that EPA should make the data generated under the pilot more accessible to the public, to other EPA program offices, and to other federal and state agencies. Conversely, the American Academy of Pediatrics commented that the VCCEP pilot is failing in its goal to provide timely or useful information on chemical exposures and their implications to the public or to health care providers. EPA plans to prepare a comments document summarizing the comments received from the stakeholders and publish it on the VCCEP Web site. In addition, EPA plans to complete a final evaluation of the effectiveness of the VCCEP pilot in late 2007. REACH created a single system for the regulation of new and existing chemicals and, once implemented, will generally require chemical companies to register chemicals produced or imported at 1 ton or more per producer or importer per year with a newly created European Chemicals Agency. Information requirements for registration will vary according to the production volume and suspected toxicity of the chemical. For chemicals produced at 1 ton or more per producer or importer per year, chemical companies subject to registration will be required to submit information for the chemical, such as the chemical's identity; how it will be produced; how it will be used; guidance on its safe use; exposure information; and study summaries of physical/chemical properties and their effects on human health or the environment. REACH specifies the amount of information to be included in the study summaries based on the chemical's production volume, i.e., how much of the chemical will be produced or imported each year. The information requirements may be met through a variety of methods, including existing data, scientific modeling, or testing. REACH separates the production volume information requirements into four metric tonnage bands—1 ton or more, 10 tons or more, 100 tons or more, and 1,000 tons or more. Hazard information must be submitted for each tonnage band, with each higher band requiring the information specified for the lower bands in addition to that specified for the band itself. For example, at the 1 ton or more band, REACH requires information on environmental effects that includes short-term toxicity to invertebrates, toxicity to algae, and ready biodegradability.
At the 10 tons or more band, REACH requires such information in addition to a chemical safety assessment, which includes an assessment of the chemical's human health and environmental hazards; a physicochemical hazard assessment; an environmental hazard assessment; and an assessment of the chemical's potential to be a persistent, bioaccumulative, and toxic pollutant (that is, a chemical that persists in the environment, bioaccumulates in food chains, and is toxic). Table 1 shows the total number of chemical endpoints—the chemical or biological effect that is assessed by a test method—required for chemicals produced at various production volumes, where applicable, for TSCA, the HPV Challenge Program, and REACH. While industry participation in EPA's HPV Challenge Program is voluntary, we have included information on the number of endpoints to be produced for chemicals in the program for comparison purposes. As the table shows, companies will provide a greater number of endpoints on chemicals under REACH than under TSCA or the HPV Challenge Program. Additionally, appendix IV provides a listing of specific information requirements or endpoints for three testing categories: physical/chemical, human health, and environmental effects/fates. Both TSCA and REACH provide regulators with authorities to control chemical risks by restricting the production or use of both new and existing chemicals. Under TSCA, EPA must generally compile data needed to assess the potential risks of chemicals and must also develop substantial evidence in the rule-making record in order to withstand judicial review. In contrast, REACH is based on the principle that chemical companies—manufacturers, importers, and downstream users—should ensure that the chemicals they manufacture, place on the market, or use do not adversely affect human health or the environment. Even when EPA has toxicity and exposure information on existing chemicals, the agency has had difficulty demonstrating that chemicals present or will present an unreasonable risk and that they should have limits placed on their production or use. Since the Congress enacted TSCA in 1976, EPA has issued regulations under Section 6 of the act to limit the production or restrict the use of five existing chemicals or chemical classes. The five chemicals or chemical classes are polychlorinated biphenyls (PCB), fully halogenated chlorofluoroalkanes, dioxin, asbestos, and hexavalent chromium. In addition, under Section 5(a)(2) of TSCA, EPA issued significant new use rules for 160 existing chemicals that require chemical companies to submit notices to EPA prior to commencing the manufacture, import, or processing of the substance for a significant new use. In order to regulate an existing chemical under Section 6(a) of TSCA, EPA must find that there is a reasonable basis to conclude that the chemical presents or will present an unreasonable risk of injury to health or the environment.
Before regulating a chemical under Section 6(a), the EPA Administrator must consider and publish a statement regarding the effects of the chemical on human health and the magnitude of human exposure to the chemical; the effects of the chemical on the environment and the magnitude of the environment’s exposure to the chemical; the benefits of the chemical for various uses and the availability of substitutes for those uses; and the reasonably ascertainable economic consequences of the rule, after consideration of the effect on the national economy, small business, technological innovation, the environment, and public health. Further, the regulation must apply the least burdensome requirement that will adequately protect against such risk. For example, if EPA finds that it can adequately manage the unreasonable risk of a chemical through requiring chemical companies to place warning labels on the chemical, EPA could not ban or otherwise restrict the use of that chemical. Additionally, if the EPA Administrator determines that a risk of injury to health or the environment could be eliminated or sufficiently reduced by actions under another federal law, then TSCA prohibits EPA from promulgating a rule under Section 6(a) of TSCA, unless EPA finds that it is in the public interest considering all aspects of the risk, the estimated costs of compliance, and the relative efficiency of such action to protect against risk of injury. Finally, EPA must also develop substantial evidence in the rule-making record in order to withstand judicial review. Under TSCA, a court reviewing a TSCA rule “shall hold unlawful and set aside…if the court finds that the rule is not supported by substantial evidence in the rule-making record.” According to EPA officials responsible for administering TSCA, the economic costs of regulating a chemical are usually more easily documented than the risks of the chemical or the benefits associated with controlling those risks, and it is difficult to show by substantial evidence that EPA is promulgating the least burdensome requirement. According to EPA officials in OPPT who are responsible for implementing TSCA, the use of Section 6(a) has presented challenges as the agency must, in effect, perform a cost-benefit analysis, considering the economic and societal costs of placing controls on the chemical. Specifically, these officials say that EPA must take into account the benefits provided by the various uses of the chemical, the availability of substitutes, and the reasonably ascertainable economic consequences of regulating the chemical after considering the effects of such regulation on the national economy, small business, technological innovation, the environment, and public health. EPA’s 1989 asbestos rule illustrates the evidentiary requirements that TSCA places on EPA to control chemicals under TSCA Section 6(a). The rule prohibited the future manufacture, importation, processing, and distribution of asbestos in almost all products. Some of the manufacturers of these asbestos products filed suit against EPA, arguing that the rule was not promulgated on the basis of substantial evidence regarding unreasonable risk. In October 1991, the U.S. Court of Appeals for the Fifth Circuit agreed with the manufacturers, concluding that EPA had failed to muster substantial evidence to justify its asbestos ban and returning parts of the rule to EPA for reconsideration. 
In reaching this conclusion, the court found that EPA did not consider all necessary evidence and failed to show that the control action it chose was the least burdensome reasonable regulation required to adequately protect human health or the environment. As articulated by the court, the proper course of action for EPA, after an initial showing of product danger, would have been to consider the costs and benefits of each regulatory option available under Section 6, starting with the less restrictive options, such as product labeling, and working up through a partial ban to a complete ban. The court further criticized EPA's ban of asbestos in products for which no substitutes were currently available, stating that, in such cases, EPA "bears a tough burden" to demonstrate, as TSCA requires, that a ban is the least burdensome alternative. The court's decision on the asbestos rule is especially revealing about Section 6 because EPA spent 10 years preparing the rule. In addition, asbestos is generally regarded as one of the substances for which EPA has the most scientific evidence or documentation of substantial adverse health effects. Since the U.S. Court of Appeals for the Fifth Circuit's ruling in October 1991, EPA has not used TSCA Section 6 to restrict any chemicals. However, EPA has used Section 6 to issue a proposed ban on certain grouts, which was later withdrawn when industry agreed to use personal protection equipment to address worker exposure issues, and to issue an Advance Notice of Proposed Rule Making for methyl-t-butyl ether because of widespread drinking water contamination. Although TSCA's Section 6 has been used infrequently, the Director of OPPT and other EPA officials responsible for implementing TSCA told us that they believe that taking action under this section remains a practicable option for the agency. Section 5(a)(2) requires chemical companies to notify EPA at least 90 days before beginning to manufacture or process a chemical for a use that EPA has determined by rule is a significant new use. EPA has these 90 days to review the chemical information in the premanufacture notice and identify the chemical's potential risks. Under Section 5(e), if EPA determines that there is insufficient information available to permit a reasoned evaluation of the health and environmental effects of a chemical and that (1) in the absence of such information, the chemical may present an unreasonable risk of injury to health or the environment or (2) it is or will be produced in substantial quantities and (a) it either enters or may reasonably be anticipated to enter the environment in substantial quantities or (b) there is or may be significant or substantial human exposure to the substance, then EPA can issue a proposed order or seek a court injunction to prohibit or limit the manufacture, processing, distribution in commerce, use, or disposal of the chemical. Under Section 5(f), if EPA finds that the chemical will present an unreasonable risk, EPA must act to protect against the risk.
If EPA finds that there is a reasonable basis to conclude that a new chemical may pose an unreasonable risk before it can protect against such risk by regulating it under Section 6 of TSCA, EPA can (1) issue a proposed rule, effective immediately, to require the chemical to be marked with adequate warnings or instructions, to restrict its use, or to ban or limit the production of the chemical or (2) seek a court injunction or issue a proposed order to prohibit the manufacture, processing, or distribution of the chemical. According to the Director of OPPT, it is less difficult for the agency to demonstrate that a chemical “may present” an unreasonable risk than it is to show that a chemical “will present” such a risk. Thus, EPA has found it easier to impose controls on new chemicals when warranted. Despite limitations in the information available on new chemicals, EPA’s reviews have resulted in some action being taken to reduce the risks of over 3,800 of the 33,000 new chemicals that chemical companies have submitted for review since 1979. These actions included, among other things, chemical companies voluntarily withdrawing their notices of intent to manufacture new chemicals, and entering into consent orders with EPA to produce a chemical only under specified conditions. In addition, EPA has promulgated significant new use rules requiring chemical companies to notify EPA of their intent to manufacture or process certain chemicals for any uses that EPA has determined to be a "significant new use." For over 1,700 chemicals, companies withdrew their PMNs sometimes after EPA officials indicated that the agency planned to initiate the process for placing controls on the chemicals, such as requiring testing or prohibiting the production or certain uses of the chemical. The Director of OPPT told us that after EPA has screened a new chemical or performed a detailed analysis of it, chemical companies may drop their plans to market the chemical when the chemical’s niche in the marketplace is uncertain and EPA requests that the company develop and submit test data or apply exposure controls. According to EPA officials, companies may be uncertain that they will recoup costs associated with the test data and controls and prefer to withdraw their PMN. In addition, for over 1,300 chemicals, EPA issued orders requiring chemical companies to implement workplace controls or practices during manufacturing pending the development of information on the risks posed by the chemicals and/or to perform toxicity testing if the chemicals’ production volumes reached certain levels. For over 570 of the 33,000 new chemicals submitted for review, EPA required chemical companies to submit notices for any significant new uses of the chemical, providing EPA the opportunity to review the risks of injury to human health or the environment before new uses begin. For example, in 2003, EPA promulgated a significant new use rule requiring chemical companies to submit a notice for the manufacture or processing of substituted benzenesulfonic acid salt for any use other than as described in the PMN. To control chemical risks, REACH provides procedures for both authorizing and restricting the use of chemicals. Authorization procedures under REACH have three major steps. First, the European Chemicals Agency will publish a list of chemicals—known as the candidate list—that potentially need authorization before they can be used. 
The chemical agency will determine which chemicals to place on the candidate list after it has reviewed the information that chemical companies submit to the agency at the time the chemicals are registered under REACH and after considering the input provided by individual EU member states and the European Commission. In making this determination, the agency is to use criteria set forth in REACH, covering issues such as bioaccumulation, carcinogenicity, and reproductive toxicity. Second, the European Commission will determine which chemicals on the candidate list will require authorization and which will be exempted from the authorization requirements. According to the Environment Counselor for the Delegation of the European Commission to the United States, some chemicals may be exempted from authorization requirements because, so far, sufficient controls established by other legislation are already in place. Finally, once a chemical has been deemed to require authorization, a chemical company will have to apply to the European Commission for an authorization for each use of the chemical. The application for authorization must include an analysis of the technical and economic feasibility of using safer substitutes and, if appropriate, information about any relevant research and development activities by the applicant. If such an analysis shows that suitable alternatives are available for any use of the chemical, then the application must also include a plan for substituting the safer chemical for the chemical of concern in that particular use. The European Commission is generally required to grant an authorization if the applicant meets the burden of demonstrating that the risks from the manufacture, use, or disposal of the chemical can be adequately controlled, except for (1) PBTs; (2) very persistent, very bioaccumulative chemicals (vPvBs); and (3) certain other chemicals, including those that are carcinogenic or reproductive toxins. However, even these chemicals may receive authorization if a chemical company can demonstrate that the social and economic benefits outweigh the risks. In addition, 6 years after REACH goes into effect (or in 2013), the European Commission will review whether endocrine disrupters should also be excluded from authorization unless chemical companies can demonstrate that the social and economic benefits outweigh their risks. Eventually, all chemicals granted authorizations under REACH will be reviewed to ensure that they can be safely manufactured, used, and disposed of. The time frame for such reviews will be determined on a case-by-case basis that takes into account information such as the risks posed by the chemical, the availability of safer alternatives, and the social and economic benefits of the use of the chemical. For example, if suitable substitutes become available, the authorization may be amended or withdrawn, even if the chemical company that was granted the authorization has demonstrated that the chemical can be safely controlled. In addition to such authorization procedures, REACH provides procedures for placing restrictions on chemicals that pose an unacceptable risk to health or the environment. The restriction may completely ban a chemical or limit its use by consumers or by manufacturers of certain products. REACH's restrictions procedures enable the EU to regulate communitywide conditions for the manufacture, marketing, or use of certain chemicals where there is an unacceptable risk to health or the environment.
Proposals for restrictions will be prepared either by a Member State or by the European Chemicals Agency at the request of the European Commission. The proposal must demonstrate that there is a risk to human health or the environment that needs to be addressed at the communitywide level and identify the most appropriate set of risk reduction measures. Interested parties will have an opportunity to comment on the restriction proposal. However, the final determination on the restriction proposal will be made by the European Commission. Because no chemicals have undergone REACH's authorization and restriction procedures, it is not possible to comment on the ability of these procedures to control the risks of chemicals to human health or the environment. TSCA and REACH require public disclosure of certain information on chemicals, and both laws protect confidential or sensitive business information, although the extent to which information can be claimed as confidential or sensitive varies under the two laws. In this regard, one of the objectives of REACH is to make information on chemicals more widely available to the public. Accordingly, REACH places greater limitations on the kinds of information that companies may claim as confidential or sensitive. TSCA has provisions to protect information claimed by chemical companies as confidential or sensitive business information, such as information on chemical production volumes and trade secret formulas. Health and safety studies, however, generally cannot be considered confidential business information, and TSCA has provisions for making such studies available to the public. Additionally, EPA can disclose confidential business information when it determines such disclosure is necessary to protect human health or the environment from an unreasonable risk. EPA interprets the term health and safety study broadly and, as such, it may include but is not limited to epidemiological, occupational exposure, toxicological, and ecological studies. However, TSCA generally allows chemical companies to claim any information provided to EPA, other than health and safety studies, as confidential. TSCA requires EPA to protect the information from unauthorized disclosure. More specifically, TSCA restricts EPA's ability to share certain information it collects from chemical companies—such as information about the company (including its identity), the chemical's identity, or the site of operation—including with state officials or with officials of foreign governments. If a request is made for disclosure of the confidential information, EPA regulations require the chemical company to substantiate the claims by providing the agency information on a number of issues, such as whether the identity of the chemical had been kept confidential from competitors and what harmful effects to the company's competitive position would result from publication of the chemical on the TSCA inventory. State environmental agencies and others are interested in obtaining chemical information, including that claimed as confidential, for use in various activities, such as developing contingency plans to alert emergency response personnel of the presence of highly toxic substances at local manufacturing facilities. Likewise, the general public may find information collected under TSCA useful to engage in dialogues with chemical companies about reducing chemical risks and limiting chemical exposures at nearby facilities that produce or use toxic chemicals.
While EPA believes that some claims of confidential business information may be unwarranted, challenging the claims is resource-intensive. According to a 1992 EPA study, the latest performed by the agency, problems with inappropriate claims were extensive. This study examined the extent to which companies made confidential business information claims, the validity of the claims, and the impact of inappropriate claims on the usefulness of TSCA data to the public. The study found that many of the confidentiality claims submitted under TSCA were not appropriate, particularly for health and safety data. For example, between September 1990 and May 1991, EPA reviewed 351 health and safety studies that chemical companies submitted with a claim of confidentiality. EPA challenged the confidentiality claimed for 77, or 22 percent, of the studies, and in each case the submitter amended the confidentiality claim when challenged by EPA. Currently, while EPA may suspect that some chemical companies' confidentiality claims are unwarranted, the agency does not have data on the number of inappropriate claims. As we reported in June 2005, EPA focuses on investigating primarily those claims that it believes may be both inappropriate and among the most potentially important—that is, claims relating to health and safety studies performed by chemical companies. According to the EPA official responsible for initiating challenges to confidentiality claims, the agency challenges about 14 such claims each year, and the chemical companies withdraw nearly all of the claims challenged. Chemical companies have expressed interest in working with EPA to identify ways to enable other organizations to use the information, provided that appropriate safeguards are adopted. In addition, chemical company representatives told us that, in principle, they have no concerns about revising TSCA or EPA regulations to require that confidentiality claims be periodically reasserted and reviewed. However, neither TSCA nor EPA regulations require periodic reviews to determine when information no longer needs to be protected as confidential. In our June 2005 report, we recommended that EPA revise its regulations to require that companies reassert claims of confidentiality submitted to EPA under TSCA within a certain time period after the information is initially claimed as confidential. In July 2006, EPA responded to Congress that the agency planned to initiate a pilot process, using its existing authorities, to review selected older submissions containing CBI claims. According to EPA officials, the agency is examining PMNs and notices of commencement submitted to EPA from fiscal year 1993 through March 2007 and plans to compile statistics on the numbers and percentages of submissions and the types of CBI claims made. Based on the agency's review, and in light of its other regulatory priorities, EPA will consider whether rule making is appropriate to maximize the benefits of a reassertion program, including benefits to the public. However, no completion date has been determined for the pilot. Similar to TSCA, REACH has provisions to protect information claimed by chemical companies as confidential or sensitive, including trade secret formulas and production volumes.
In addition, REACH treats some information as confidential, including the following, even if a company did not claim it as confidential: (1) details of the full composition of the chemical's preparation; (2) the precise use, function, or application of the chemical or its preparation; (3) the precise tonnage or volume of the chemical manufactured or placed on the market; or (4) relationships between manufacturers/importers and downstream users. In exceptional cases where there are immediate risks to human health and safety or to the environment, REACH authorizes the European Chemicals Agency to publicly disclose this information. Furthermore, unlike TSCA, REACH places substantial restrictions on the types of data that chemical companies may claim as confidential. Consistent with one of the key objectives of REACH, the legislation makes information on hazardous chemicals widely available to the public by limiting the types of hazard information that chemical companies may claim as confidential. It generally does not allow confidentiality claims related to, among other things, guidance on the chemical's safe use; the chemical's physicochemical properties, such as melting and boiling points; and the results of toxicological and ecotoxicological studies, including analytical methods that make it possible to detect a dangerous substance when it is discharged into the environment and to determine the effects of direct exposure to humans. In addition, other information, such as study summaries and tonnage band information, will be available unless a chemical company justifies that disclosing the information would harm its commercial interests. REACH also requires that safety data sheets be provided for PBTs, vPvBs, and other chemicals classified as dangerous to ensure that commercial users of a chemical—known as downstream users—and distributors, as well as chemical manufacturers and importers, have the information they need to use chemicals safely. The data sheets, which chemical companies are required to prepare, include information on health, safety, and environmental properties, and risks and risk management measures. Similar to TSCA, REACH requires public disclosure of health and safety information and has provisions for making information available to the public. REACH also includes a provision for public access to basic chemical information, including brief profiles of hazardous properties, labeling requirements, authorized uses, and risk management measures. The European Union's rules regarding public access to information balance, in a variety of ways, the public's right to know with the need to keep certain information confidential. Accordingly, nonconfidential information will be published on the chemical agency's Web site. However, some types of information are always to be treated as confidential under REACH, such as precise production volume. REACH also includes a provision under which confidential information can generally be shared with government authorities of other countries or international organizations under an agreement between the parties, provided that the following conditions are met: (1) the purpose of the agreement is cooperation on implementation or the management of legislation concerning the chemicals covered by REACH and (2) the foreign government or international organization protects the confidential information as mutually agreed.
In our June 2005 report, we suggested that Congress should consider amending TSCA to authorize EPA to share with the states and foreign governments the confidential business information that chemical companies provide to the agency, subject to regulations to be established by EPA in consultation with the chemical industry and other interested parties that would set forth the procedures to be followed by all recipients of the information in order to protect the information from unauthorized disclosures. Furthermore, chemical industry representatives told us that chemical companies would not object to Congress revising TSCA to allow those with a legitimate reason to obtain access to the confidential business information provided that adequate safeguards exist to protect the information from inappropriate disclosures. In addition, EPA officials said that harmonized international chemical assessments would be improved if the agency had the ability to share this information under appropriate procedures to protect confidentiality. Substantial differences exist between TSCA and REACH in their approaches to obtaining the information needed to identify chemical risks; controlling the manufacture, distribution, and use of chemicals; and providing the public with information on harmful chemicals. Assuming that the EU has the ability to review chemical information in a timely manner, specific provisions under REACH provide a means for addressing long-standing difficulties experienced under both TSCA and previous European chemicals legislation in (1) obtaining information on chemicals' potentially harmful characteristics and the potential for people and the environment to be exposed to them and (2) making the chemical industry more accountable for ensuring the safety of its products. Furthermore, REACH is structured to provide a broader range of data about chemicals that could enable people to make more informed decisions about the products they use in their everyday lives. We have identified, in our previous reports on TSCA, various potential revisions to the act that could strengthen TSCA to obtain additional chemical information from the chemical industry, shift more of the burden to chemical companies for demonstrating the safety of their chemicals, and enhance the public's understanding of the risks of chemicals to which they may be exposed. We provided EPA and the Environment Counselor for the Delegation of the European Commission to the United States a draft of this report for review and comment. Both EPA and the Environment Counselor for the Delegation of the European Commission provided technical comments, which we have incorporated into this report as appropriate. EPA also provided written comments. EPA highlighted the regulatory actions it has taken under TSCA and noted that TSCA is a "fully implemented statute that has withstood the test of time" and that, in contrast, "REACH is not yet in force, and there is no practical experience with any aspect of its implementation." Furthermore, while EPA agreed that it is possible to compare the approaches used to protect against the risks of toxic chemicals under TSCA and REACH, "it is not yet possible to evaluate or compare the effectiveness of the different chemical management approaches or requirements." EPA's written comments are presented in appendix V. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies of this report to the congressional committees with jurisdiction over EPA and its activities; the Administrator, EPA; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our objectives were to describe how the Toxic Substances Control Act (TSCA) compares with Registration, Evaluation and Authorization of Chemicals (REACH) in its approaches to (1) identifying chemicals harmful to public health and the environment, (2) controlling chemical risks, and (3) disclosing chemical data to the public while protecting confidential business information. In addressing these issues, we also obtained information on the Environmental Protection Agency's (EPA) voluntary chemical control programs that complement TSCA. We reviewed the relevant provisions of TSCA, identified and analyzed EPA's regulations on how the new and existing chemical review and control programs work, including the handling of confidential information, and determined the extent of actions taken by EPA to control chemicals. These efforts were augmented by interviews with EPA officials in the agency's Office of Pollution Prevention and Toxics (OPPT), the EPA office with primary responsibility for implementing TSCA, the High Production Volume (HPV) Challenge Program, and the Voluntary Children's Chemical Evaluation Program (VCCEP) pilot. In addition, we interviewed representatives of the American Chemistry Council (a national chemical manufacturers association), Environmental Defense (a national, nonprofit, environmental advocacy organization), and the Synthetic Organic Chemical Manufacturers Association (a national specialty chemical manufacturers association). We also attended meetings of EPA's National Pollution Prevention and Toxics Advisory Committee (NPPTAC) and attended various conferences sponsored by EPA and others. We selected the industry and environmental experts we interviewed based on discussions with NPPTAC representatives and based on our prior work on TSCA. Finally, we obtained and reviewed EPA documents related to its chemical program. For reviewing REACH, we obtained laws, technical literature, and government documents that describe the European Union's (EU) chemical control program. We also interviewed EU officials who helped develop and who will be involved in implementing REACH, including the Environment Counselor for the Delegation of the European Commission to the United States and representatives from the European Commission and the European Parliament. Our descriptions of these laws are based on interviews with government officials and written materials they provided.
In addition, we interviewed representatives of the American Chamber of Commerce to the EU, American Chemistry Council (a national chemical manufacturers association), Environmental Defense (a national, nonprofit environmental advocacy organization), the European Chemical Industry Council (an EU chemical manufacturers association), the European Environmental Bureau (a federation of environmental advocacy organizations based in the EU Member States), and the Synthetic Organic Chemical Manufacturers Association (a national specialty chemical manufacturers association). Furthermore, we interviewed staff from the U.S. Mission to the EU. Finally, for the purposes of this report, we used the REACH legislation that was approved in December 2006 as the basis for our comparison with TSCA. Our review was performed between January 2006 and May 2007 in accordance with generally accepted government auditing standards. New chemicals are those not on the TSCA Inventory. Existing chemicals are those listed in the TSCA Inventory. REACH creates a single system so that there will be virtually no distinction between new and existing chemicals. Originally 62,000. Of the more than 82,000 chemicals currently in the TSCA Inventory, approximately 20,000 were added to the inventory since EPA began reviewing chemicals in 1979. EU officials estimated the number of chemicals with production or import levels of at least 1 metric ton (2,205 pounds) to be about 30,000. Chemical registration will be phased in over 11 years after enactment of REACH. Companies are required to notify EPA prior to manufacturing a new chemical. Companies notify EPA of their intent to manufacture a new chemical through submission of a Premanufacture Notice (PMN) or of an application for exemption. After the PMN review period has expired and within 30 days of the chemical's manufacture, companies submit a Notice of Commencement of Manufacture or Import to EPA. The chemical is then added to the TSCA Inventory, and the chemical is classified as an existing chemical. TSCA generally does not require chemical companies to notify EPA of changes in use or production volume. However, every 5 years companies are required to update EPA on information such as the processing, use, and production volume of chemicals produced at over 25,000 pounds. In general, REACH treats new and existing chemicals the same. Chemical companies register chemicals with the European Chemicals Agency once production or import of a chemical reaches 1 metric ton (2,205 pounds). Companies must also notify EPA if the company obtains information that reasonably supports the conclusion that the chemical presents a substantial risk to human health or the environment. After registration, companies are required to immediately notify the European Chemicals Agency of significant changes in use or production volumes of the registered chemical. Based on information compiled through a series of steps, including a chemical review strategy meeting, structure-activity relationship analysis, and exposure-based reviews, EPA makes a decision ranging from "dropping" a chemical from further review to banning a chemical pending further information. TSCA does not require EPA to systematically prioritize and assess existing chemicals. The European Chemicals Agency will develop the criteria for prioritizing chemicals for further review based on, among other things, hazard data, exposure data, and production volume.
However, TSCA established an Interagency Testing Committee—an advisory committee created to identify chemicals for which there are suspicions of toxicity or exposure and for which there are few, if any, ecological effects, environmental fate, or health-effects testing data—to recommend chemicals to which EPA should give priority consideration in promulgating test rules. Member states may use these criteria when developing their list of chemicals to be reviewed. EPA also plans to use the High Production Volume (HPV) Challenge Program and the information under the Inventory Update Rule to help the agency prioritize the chemicals it will review. New chemicals, once they have commenced manufacture, are added to the TSCA Inventory. Such former new chemicals can be subject to significant new use rules (SNUR) or restrictions on the manufacture, processing, distribution in commerce, use, or disposal of the chemical under TSCA 5(e) consent orders. Chemical companies report use information once every 5 years under TSCA's Inventory Update Rule (IUR), which is primarily used to gather certain information on chemicals produced at the threshold of 25,000 pounds or more. Chemical companies must immediately inform the European Chemicals Agency in writing of new uses of the chemical about which the company may reasonably be expected to have become aware. However, in the absence of a SNUR on a particular chemical, there is no requirement for chemical companies to notify EPA of significant new uses of existing chemicals in the intervening years or for chemicals produced at less than 25,000 pounds. Manufacturers and processors of existing chemicals subject to a SNUR must notify EPA 90 days before manufacture of or processing for a significant new use. Chemical companies are not required to perform assessments of the risks of new chemicals. However, if a company has voluntarily performed risk assessments, it must submit these data with the PMN. Chemical companies are not required to complete assessments of the risks of existing chemicals. However, TSCA requires chemical companies to notify EPA immediately of new unpublished information on chemicals that reasonably supports a conclusion of substantial risk. Chemical companies must conduct a risk assessment in addition to European Chemicals Agency review for all chemicals produced at a level of 1 ton or more per year. Additionally, chemical companies must conduct a chemical safety assessment for all chemicals produced at a level of 10 tons or more per year. TSCA contains no specific language relating to reducing animal testing. However, according to EPA officials, TSCA's approach of not requiring companies to test new chemicals for health hazards or environmental effects absent EPA action, combined with EPA's use of Structure Activity Relationship (SAR) analysis, reduces the need for animal testing compared with requiring a base set of data without the use of SAR analysis. No specific language relating to reducing animal testing. However, under the HPV Challenge Program, EPA encourages companies to consider approaches, such as using existing data, sharing data, and using SAR and read-across approaches, that would reduce the amount of animal testing needed. Further, EPA does not require retesting for chemicals with adequate Screening Information Data Sets data.
EPA has expressed its commitment to examining alternate test methods that reduce the number of animals needed for testing, that reduce pain and suffering to test animals, or that replace test animals with validated in vitro (nonanimal) test systems. REACH states that testing on vertebrate animals for the purposes of regulation shall be undertaken as a last resort. To reduce the amount of animal testing, REACH encourages the sharing and joint submission of information. REACH implementation guidance encourages the use of SAR and read-across approaches. Further, registrants may use any study summaries or robust study summaries performed within the 12 previous years by another manufacturer or importer to register, after due compensation of the costs to the owner of the data. In addition, under the Voluntary Children's Chemical Evaluation Program (VCCEP), EPA encouraged participating companies to reduce or eliminate animal testing. Chemical companies must provide EPA a reasonable third-year estimate of the total production volume of a new chemical at the time a PMN is submitted. Chemical companies report production quantities every 5 years for those chemicals on the TSCA inventory and produced at quantities of 25,000 pounds or more through the Inventory Update Rule (IUR). Chemical companies must include information on the overall manufacture or import of a chemical in metric tons per year in a technical dossier with their registration. Chemical companies must immediately report any significant changes in the annual or total quantities manufactured or imported. No specific requirement relating to downstream users. No specific requirement relating to downstream users. Under REACH, downstream users must assemble and keep available all information required to carry out duties under REACH for a period of at least 10 years after the substance has been used. They must also prepare a chemical safety report for any use outside the conditions described in an exposure scenario or, if appropriate, a use and exposure category described in a safety data sheet, or for any use the supplier advises against. Downstream users may also provide information to assist in the preparation of a registration. EPA can issue a proposed order or seek a court injunction to prohibit or limit the manufacture, processing, distribution in commerce, use, or disposal of a chemical if EPA determines that there is insufficient information available to permit a reasoned evaluation of the health and environmental effects of a chemical and that (1) in the absence of such information, the chemical may present an unreasonable risk of injury to health or the environment or (2) it is or will be produced in substantial quantities and (a) it either enters or may reasonably be anticipated to enter the environment in substantial quantities or (b) there is or may be significant or substantial human exposure to the substance. TSCA requires EPA to apply regulatory requirements to chemicals for which EPA finds a reasonable basis to conclude that the chemical presents or will present an unreasonable risk to human health or the environment. To adequately protect against a chemical's risk, EPA can promulgate a rule that bans or restricts the chemical's production, processing, distribution in commerce, use, or disposal, or that requires warning labels to be placed on the chemical. Chemicals may be regulated under provisions known as authorization and restriction. Authorization is required for the use of substances of very high concern.
This includes substances that are (1) carcinogenic, mutagenic, or toxic for reproduction; (2) persistent, bioaccumulative, and toxic or very persistent and very bioaccumulative; or (3) identified as causing serious and irreversible effects to humans or the environment, such as endocrine disrupters. Section 6(a) authorizes EPA to regulate existing chemicals, including restriction or prohibition. EPA is required to apply the least burdensome requirement, and the rule must be supported by substantial evidence in the rule-making record. Restrictions on substances relating to their manufacture, marketing, or use, including banning, may be required where there is an unacceptable risk to health or the environment. EPA maintains compliance officials to monitor compliance with TSCA. EPA maintains compliance officials to monitor compliance with TSCA. REACH requires EU Member States to monitor compliance with provisions of REACH. No specific language relating to substitution or finding safer alternatives. No specific language relating to substitution or finding safer alternatives. Authorization applications (for chemicals of very high concern) require an analysis of possible alternatives or substitutes. TSCA allows companies to make confidentiality claims on nearly all information they provide to EPA. TSCA allows companies to make confidentiality claims on nearly all information they provide to EPA. REACH allows chemical companies to make confidentiality claims; however, it places restrictions on what kinds of information companies may claim as confidential. TSCA requires that existing health and safety-related information be made available to the public. TSCA requires that existing health and safety-related information be made available to the public. EPA uses its HPV Challenge Program to voluntarily gather information from industry and ensure that a minimum set of basic data on approximately 2,800 high-production-volume chemicals is available to the public. REACH requires public disclosure of information such as the trade name of the substance, certain physicochemical data, guidance on safe use, and all health and safety-related information. No specific language relating to children's health. No specific language relating to children's health. No specific language relating to children's health. However, under the TSCA Inventory Update Reporting Regulation of December 2005, manufacturers of chemicals in volumes of 300,000 pounds or more must report use in or on products intended for use by children. As requested, we identified a number of options that could strengthen EPA's ability under TSCA to assess chemicals and control those found to be harmful. These options were identified in earlier GAO reports on ways to make TSCA more effective. Representatives of environmental organizations and subject matter experts subsequently concurred with a number of these options and commented on them in congressional testimony. These options are not meant to be comprehensive but illustrate actions that the Congress could take to strengthen EPA's ability to regulate chemicals under TSCA. The Congress may wish to consider revising TSCA to place more of the burden on industry to demonstrate that new chemicals are safe.
Some of the burden could be shifted by requiring industry to test new chemicals based on substantial production volume and the necessity for testing, and to notify EPA of significant increases in production, releases, and exposures or of significant changes in manufacturing processes and uses after new chemicals are marketed. To put existing chemicals on a more equal footing with new chemicals, the Congress could consider revising TSCA to set specific deadlines or targets for the review of existing chemicals. These deadlines or targets would help EPA to establish priorities for reviewing those chemicals that, on the basis of their toxicity, production volumes, and potential exposure, present the highest risk to health and the environment. The Congress could also consider revising TSCA to shift more of the burden for reviewing existing chemicals to industry. If more of the responsibility for assessing existing chemicals were shared by industry, EPA could review more chemicals with current resources. In deciding how much of the burden to shift to industry, the Congress would need to consider the extent to which providing data to show that chemicals are safe should be a cost of doing business for the chemical industry. To ensure that EPA can implement its initiatives without having to face legal challenges and delays, the Congress may wish to consider revising TSCA to provide explicit authority for EPA to enter into enforceable consent agreements under which chemical companies are required to conduct testing; clarify that health and safety data cannot be claimed as confidential; require substantiation of confidentiality claims at the time that the claims are submitted to EPA; limit the length of time for which information may be claimed as confidential without reaffirming the need for confidentiality; establish penalties for the false filing of confidentiality claims; and authorize states and foreign governments to have access to confidential business information when they can demonstrate to EPA that they have a legitimate need for the information and can adequately protect it against unauthorized disclosure. Once a company begins production of a chemical, it is placed on the TSCA Inventory and is classified as an existing chemical. For the HPV Challenge Program, only one of the three tests—oral route, inhalation, or dermal route—is required. For REACH, the oral route test is the only one required at one ton or above, and all three (oral, inhalation, and dermal) are required at 10 tons or above. These tests may be required at production volumes of 1 million pounds (about 454 tons) or more. Three biotic degradation tests are specified: simulation testing on ultimate degradation in surface water; soil simulation testing (for substances with a high potential for adsorption to soil); and sediment simulation testing (for substances with a high potential for adsorption to sediment). The choice of the appropriate test(s) depends on the results of the chemical safety assessment. In addition to the individual named above, David Bennett, John Delicath, Richard Johnson, Valerie Kasindi, Ed Kratzer, and Tyra Thompson made key contributions to this report.
Chemicals play an important role in everyday life. However, some chemicals are highly toxic and need to be regulated. In 1976, the Congress passed the Toxic Substances Control Act (TSCA) to authorize the Environmental Protection Agency (EPA) to control chemicals that pose an unreasonable risk to human health or the environment, but some have questioned whether TSCA provides EPA with enough tools to protect against chemical risks. Like the United States, the European Union (EU) has laws governing the production and use of chemicals. The EU has recently revised its chemical control policy through legislation known as Registration, Evaluation and Authorization of Chemicals (REACH) in order to better identify and mitigate risks from chemicals. GAO was asked to review the approaches used under TSCA and REACH for (1) requiring chemical companies to develop information on chemicals' effects, (2) controlling risks from chemicals, and (3) making information on chemicals available to the public. To review these issues, GAO analyzed applicable U.S. and EU laws and regulations and interviewed U.S. and EU officials, industry representatives, and environmental advocacy organizations. GAO is making no recommendations. REACH requires companies to develop information on chemicals' effects on human health and the environment, while TSCA does not require companies to develop such information absent EPA rule-making requiring them to do so. While TSCA does not require companies to develop information on chemicals before they enter commerce (new chemicals), companies are required to provide EPA any information that may already exist on a chemical's impact on human health or the environment. Companies do not have to develop information on the health or environmental impacts of chemicals already in commerce (existing chemicals) unless EPA formally promulgates a rule requiring them to do so. Partly because of the resources and difficulties the agency faces in order to require testing to develop information on existing chemicals, EPA has moved toward using voluntary programs as an alternative means of gathering information from chemical companies in order to assess and control the chemicals under TSCA. While these programs are noteworthy, data collection has been slow in some cases, and it is unclear if the programs will provide EPA enough information to identify and control chemical risks. TSCA places the burden of proof on EPA to demonstrate that a chemical poses a risk to human health or the environment before EPA can regulate its production or use, while REACH generally places a burden on chemical companies to ensure that chemicals do not pose such risks or that measures are identified for handling chemicals safely. In addition, TSCA provides EPA with differing authorities for controlling risks, depending on whether the risks are posed by new or existing chemicals. For new chemicals, EPA can restrict a chemical's production or use if the agency determines that insufficient information exists to permit a reasoned evaluation of the health and environmental effects of the chemical and that, in the absence of such information, the chemical may present an unreasonable risk. For existing chemicals, EPA may regulate a chemical for which it finds a reasonable basis exists to conclude that it presents or will present an unreasonable risk. Further, TSCA requires EPA to choose the regulatory action that is least burdensome in mitigating the unreasonable risk. 
However, EPA has found it difficult to promulgate rules under this standard. Under REACH, chemical companies must obtain authorization to use chemicals that are listed as chemicals of very high concern. Generally, to obtain such authorization, chemical companies need to demonstrate that they can adequately control risks posed by the chemical or otherwise ensure that the chemical is used safely. TSCA and REACH both have provisions to protect information claimed by chemical companies as confidential or sensitive business information but REACH requires greater public disclosure of certain information, such as basic chemical properties, including melting and boiling points. In addition, REACH places greater restrictions on the kinds of information chemical companies may claim as confidential.
The statutory and regulatory framework for improving access to services for LEP persons stems from Title VI of the Civil Rights Act of 1964, an executive order, DOJ regulations and guidance, and DOT regulations and guidance. Section 601 of Title VI provides that no person shall "on the ground of race, color, or national origin, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance." Section 602 of Title VI directs federal agencies to implement Section 601 of the act by issuing rules, regulations, or orders. In its efforts to implement Section 601, DOJ has issued regulations that bar unjustified disparate impact on the basis of national origin. On August 11, 2000, President Clinton issued Executive Order 13166 to improve access to federally conducted and federally assisted programs and activities for persons who, as a result of national origin, are limited in their English proficiency. The order encouraged all federal agencies to take steps to ensure that any recipients of federal financial assistance under their purview provide meaningful access to their LEP applicants and beneficiaries. The order further requires each federal agency providing federal financial assistance to prepare guidance specifically tailored to its recipients. The agencies' guidance must then be reviewed and approved by DOJ before being issued. DOJ released guidance in 2000 that set forth general principles for federal agencies to apply to ensure that their programs and activities provide reasonable access to LEP persons and, thus, do not discriminate on the basis of national origin. The DOJ guidance explains that, with respect to federally assisted programs and activities, Executive Order 13166 "does not create new obligations, but rather, clarifies existing Title VI responsibilities." Although Title VI and its implementing regulations require that recipients take reasonable steps to ensure meaningful access by LEP persons, federal agencies' LEP guidance recognizes that each situation is fact-specific, and that it would not make sense for the guidance to mandate specific approaches to comply with Title VI. Rather, the purpose of federal agencies' guidance is to provide recipients with a framework for assessing their obligations under Title VI, while maintaining flexibility for the recipients to determine how best to comply with those obligations. Thus, the guidance outlines steps recipients of federal funds can take to avoid administering programs in a way that results in discrimination on the basis of national origin, which would be in violation of Title VI regulations. In general, the test for assessing the existence of national origin discrimination on the basis of language under Title VI is to determine whether the failure to provide a service in a language that a recipient understands will prevent the recipient from receiving essentially the same level of service benefit as an English speaker. DOJ's guidance established a four-factor analysis to help determine the extent of a funding recipient's obligation to provide LEP services.
These four factors are (1) the number or proportion of LEP persons eligible to be served or likely to be encountered by the program or grantee; (2) the frequency with which LEP persons come in contact with the program; (3) the nature and importance to people’s lives of the program, activity, or service provided by the grantee; and (4) the resources available to the grantee and costs. According to DOJ, the intent of the analysis is to suggest a balance that ensures meaningful access by LEP persons to critical services, while not imposing undue burdens on small businesses, local governments, or nonprofits. DOT issued its guidance in 2001. This guidance was generally consistent with DOJ’s guidance but included three additional factors, as well as the four factors previously outlined, suggesting that funding recipients should also consider (1) the level of services provided to fully English-proficient people; (2) whether LEP persons are being excluded from services, or are being provided a lower level of services; and (3) whether the agency has adequate justification for restrictions, if any, on special language services. The guidance states that such restrictions would be accepted only in rare circumstances. On the basis of public comments, DOT subsequently revised its guidance, and the revised guidance was approved by DOJ on August 25, 2005. DOT is currently preparing to publish and release its revised guidance. In addition to describing factors that funding recipients should consider in assessing their obligations to provide LEP services, DOT’s guidance outlines several key components to an effective language access program, stating that grantees should (1) conduct an assessment of the language groups within their service areas and the language needs of these groups; (2) develop and implement written plans outlining their strategies for ensuring access to services for LEP populations; (3) make staffs aware of the LEP access plan, and train the staffs and provide them with the tools necessary to carry out the plan; (4) ensure that language access services are actually provided in a consistent manner, and that LEP populations are aware of the services; and (5) develop monitoring programs that allow grantees to assess the success of their LEP access programs and to identify needed modifications. These five steps are designed to help DOT grantees ensure that they are not administering their programs in a way that results in discrimination in violation of Title VI. Several offices within DOT, particularly the Office of Civil Rights within FTA, have responsibility for ensuring that transit operators and transportation planning entities receiving DOT funds are in compliance with Title VI and responsibility for monitoring and overseeing their language access activities. The types of language access services provided by the transit agencies and MPOs we visited included translated service brochures, multilingual telephone lines, translated Web sites, bilingual customer service staffs, and a host of other services. However, the effects and costs of these services are largely unknown. The extent of language access provided varied across the areas we visited during our case studies, and services provided often varied across agencies within the same metropolitan area. Almost all of the transit agencies and MPOs we visited provided at least some language access services in Spanish, the largest LEP language group, and some agencies provided services in other languages. 
Little is known about the effects of these services on improving access to public transportation and the transportation planning and decision-making process for LEP populations, but community and advocacy groups in the areas we visited identified several gaps in the language access services provided by agencies, such as a lack of awareness in the community about the services available. Given such problems, community groups told us that more proactive agency outreach to LEP communities to determine specific needs and advertise existing services might improve the effectiveness of language access services, whereas a lack of outreach and poor publicizing of available services could likely reduce the impact and utilization of the materials and services provided. One agency cited the positive benefits it received by improving its outreach to non-English-speaking populations, including increased ridership and enhanced public support for the agency. Little is also known about the costs of providing such services, and most agencies saw the language access they provide as a cost of doing business as opposed to an additional cost; however, agencies told us that costs could become prohibitive if services were substantially expanded or provided in several additional languages. During our case studies, we found that providing language access to LEP populations can be incorporated into all of the different ways in which transit agencies and MPOs communicate with the public, not only regarding the transportation services they provide but regarding how agencies provide LEP communities with access to the transportation planning and decision-making process. Transit riders and potential transit riders may need a variety of different types of information to plan their trips, use the transit system, and participate in the transportation planning and decision-making process. For example, potential riders may need to know about the existence of available services, destinations, and travel options, and about time schedules, route options, and transfer policies. When in the transit system, riders may need to know where stops are located, whether service changes have occurred, about available fare and payment options, and about emergency and safety information. Riders may also need confirmation that they are on the right route or are exiting at the correct stop. To participate in the transportation planning and decision-making process, individuals need to know how the process works, what the purpose and effect of their participation is, and when and where public meetings are being held; they also need to be able to understand the proceedings of public meetings, make statements, and participate in those discussions. To provide such access to LEP populations, transit agencies and MPOs employed a host of different communication strategies, including the following: providing bilingual or multilingual telephone services; translating written materials; translating signs or notices posted at stations, at stops, or on vehicles; providing in-person language assistance through drivers, interpreters, or multilingual customer service staffs; advertising in other languages on television, on radio, or in newspapers; translating materials on their Web sites; translating recorded announcements or electronic signs; or making ticket machines accessible in other languages. In providing language access, the agencies in each of the areas we visited faced different challenges.
In North Carolina and northwest Arkansas, agencies are facing a substantial recent growth in the size of the Spanish-speaking population. (See app. I for more information on the size and growth of LEP populations in these two areas.) In parts of California—the San Francisco Bay Area and the Los Angeles and Orange County areas—and in Chicago, Illinois, the predominance of a number of Asian and other language groups, in addition to a large percentage of Spanish-speakers, presents further challenges. Agencies in Austin, Texas, have also experienced growth in Asian languages spoken in the area. Figure 3 shows the percentages of the transit agencies and MPOs we visited that provided services in at least Spanish for each of these communication strategies. However, in some cases, agencies may not utilize these communication strategies, even in English, and these agencies are not included in the percentage calculation. The following sections discuss transit agency and MPO activities within each of the broad categories shown in figure 3, and highlight examples from the seven metropolitan statistical areas we visited. Following the discussion of these activities, we further discuss agencies' community outreach activities related to LEP populations and to the community and advocacy groups that represent them.

All but 1 of the 20 transit agencies we visited had at least some telephone operators who were bilingual in English and Spanish, but the availability of telephone information in other languages varied. In contrast, a survey of 32 transit agencies conducted for the New Jersey Department of Transportation found that only one-half of responding agencies used multilingual telephone lines or bilingual or multilingual persons in call centers. A few transit agencies we visited in highly diverse areas, such as San Francisco and Los Angeles, had operators fluent in other languages. For example: The Metropolitan Transportation Authority in Los Angeles and San Francisco's Municipal Transportation Agency have operators that speak Tagalog and Chinese. The Bay Area Rapid Transit has Chinese-speakers available in its call center. In other cases, telephone services were not language accessible. For example, the San Francisco Bay Area's 511 traveler information line, which provides information on all of the transportation options available in the area, is currently only accessible in English. Transit agencies in Chicago; Los Angeles; Orange County; and Greensboro, North Carolina, had access to a three-way call translation service in numerous languages. While this service is available through these agencies' general transit information lines, which are advertised on most agency materials, the fact that translation services are available through the three-way call service is not well publicized. Therefore, LEP persons may not be aware of these translation services. For example, representatives of a Chinese community center in Chicago were not aware that Chinese translators were available through the Chicago Regional Transportation Authority's language line, although those representatives said they often assist new Chinese immigrants in learning how to use the transit system. In addition, the New Jersey study found, through its surveys and focus groups with LEP persons, that awareness of the existence of the translation services available in New Jersey was very low, although the study found such services to be valued by LEP persons.
Some community groups also pointed to the availability of bilingual or multilingual operators as one of the most critical and useful services that agencies can provide to LEP persons. Without such services, LEP persons must rely on family, friends, or other transit riders who speak their language to provide assistance. Transit agencies told us that complaints in other languages could also be taken through their bilingual or multilingual telephone services; many agencies had received complaints in languages other than English, primarily in Spanish. However, specific complaints about language access were rare, with only 1 agency reporting such a complaint in relation to a rider’s having trouble communicating with a driver. In some areas we visited, other nontransportation agencies receiving federal financial assistance also had contracts for multilingual telephone translation services. Because those agencies also are subject to the executive order and federal agency LEP guidance, the existence of such contracts presents an opportunity for local agencies to coordinate in order to more efficiently provide such services. Few of the transit agencies or MPOs we visited had coordinated with any other nontransportation agencies in their service areas in this regard. However, in North Carolina, transit agencies in Raleigh, Durham, Chapel Hill, and Greensboro all have relationships with other city departments that can assist with language access needs, such as sharing bilingual operators. All but 2 of the 20 transit agencies we visited printed at least some schedules and maps, how-to-ride guides, applications for specialized transportation, or other service information materials in Spanish, and many transit agencies provided extensive amounts of printed materials in Spanish. (See fig. 4 for a sample of a translated service information brochure.) In addition, the New Jersey survey of 32 transit agencies found that two-thirds of responding agencies provided translated timetables and route maps. However, officials at 3 transit agencies indicated that they often do not translate the language on maps and schedules because most of the information consists of numbers, which are universal. Seven transit agencies we visited also provided selected guides and maps in languages other than Spanish that are prevalent in their service areas, and 4 agencies are able to provide translated materials upon request. Some examples include the following: The Alameda-Contra Costa Transit District in the San Francisco Bay Area regularly prints service information in Spanish and Chinese. Also in the San Francisco Bay Area, the Bay Area Rapid Transit’s rider’s guide is printed in Spanish and Chinese. On request, the Los Angeles County Metropolitan Transportation Authority can provide information in several other languages, although the agency acknowledged that such requests were very rare. The agency also produced informational brochures in Chinese to advertise the opening of its Gold Line light-rail service, which passes through Chinatown in downtown Los Angeles. Some community groups we spoke with indicated that, if service information materials are not translated, many LEP transit riders will likely learn to use the system from family, friends, or others in their community. However, a lack of translated printed materials may discourage use of the system or participation in the transportation planning and decision-making process by affected language groups. 
Officials at 1 agency told us that providing information in the language the community is most comfortable with sends a message that they are welcome on the system and in the planning process, while not doing so may send the message that they are unwelcome. Community groups also told us that more translated service information could encourage greater ridership and make the system more welcoming to LEP persons. In addition, the New Jersey study found that, next to having a staff person speaking their native language, LEP groups most preferred to have timetable, schedule, and other information in their native language. While MPOs can serve a variety of functions and may provide a wide variety of services related to transportation, we specifically focused on informational materials related to transportation planning and public involvement provided by MPOs we visited. Three of the 7 MPOs we visited had translated a summary of their transportation plan into Spanish, with 1 MPO, the Metropolitan Transportation Commission in the San Francisco Bay Area, also translating the document into Chinese. Two MPOs had translated a citizen’s guide to participation in the transportation planning process into Spanish. Another MPO had translated a transportation needs survey into Spanish. Transit agencies we visited provided several different types of translated signs in vehicles or at stations and stops. Of the 4 agencies out of 20 that did not have such signs, 2 were primarily paratransit operators whose vehicles are operated by contractors. The types of translated signs provided included basic service information on bus stop signs, postings of service changes, fare box signs, emergency exit and priority-seating signs, public meeting notices, and posters for informational campaigns. Without translated postings of service changes, bus stop closures, or fare policies, LEP persons are at a disadvantage in accessing the transit system. One community group cited an instance of LEP persons waiting at a bus stop that had been closed due to a city event. This situation occurred because the transit agency had not posted translated notices at the bus stop announcing the closures. Of the transit agencies we visited, 8 had some basic service information signs at rail stations or bus stops available in languages other than English, and 1 agency we visited had such information available in languages other than Spanish at selected bus stops. For example, Transportation Authorities in Orange County and Los Angeles provide some information at some bus stops in Spanish (such as the direction of travel and information on their telephone lines). One agency, the Alameda-Contra Costa Transit District in Oakland, estimates that approximately 750 of its 1,200 signs are translated in Chinese and Spanish, with signs in bus shelters in the city of Oakland, California, now being replaced with seven-language signs, an example of which is shown in figure 5. Officials at 3 transit agencies stated that they had not translated street signs, or did not translate the entire sign, because much of the information is numeric and because including several languages on such signs would become unwieldy for transit riders to effectively use. Agency officials also indicated that cost could become an issue in replacing all of the signs throughout their systems, and some agencies were looking into utilizing more pictograms in order to avoid the use of multiple languages while providing more universal access. 
However, some community group representatives told us that, although the use of pictograms can be a useful way to communicate with non-English speakers, some translated language may need to accompany the pictograms in order for the information to be communicated effectively. Several of the transit agencies we visited posted or provided, in languages other than English, information on service changes or closures at rail stations, at bus stops, and in vehicles. Some examples include the following: The Orange County Transportation Authority puts service change flyers in English and Spanish in vehicles on affected bus routes. The Golden Gate Transit in San Francisco posts Spanish and English service change notices at its central transit hub. The Alameda-Contra Costa Transit District provides service change brochures in Chinese and Spanish. Ten transit agencies had on-board signs that included information on fares or emergency exits and priority-seating signs for elderly and disabled persons, and 10 agencies posted public meeting notices on their vehicles, translated into at least Spanish. A few agencies also provided fare information or posted public meeting notices on buses or in stations in other languages. For example: The San Francisco Municipal Transportation Agency and the Alameda-Contra Costa Transit District both provide fare information in Chinese and Spanish. The San Francisco Municipal Transportation Agency posts some meeting notices on its vehicles in Chinese and English, as shown in figure 6. In addition, some transit agencies we visited had translated other types of signs, such as posters in English and Spanish, generally designed under the auspices of new initiatives or information campaigns. For example, METRA Commuter Rail in Chicago and the Los Angeles County Metropolitan Transportation Authority both placed posters in English and Spanish that highlight safety issues on those systems. Orange County Transportation Authority officials credit the wide acceptance of the agency's new "no pennies" fare policy to the bilingual "Hasta Luego Pennies" campaign, as shown in figure 7.

While all but 3 of the transit agencies we visited had bilingual drivers on staff, some agency officials noted that those drivers are generally not required or instructed to make announcements in other languages and are generally not assigned to routes where their language skills may be useful. Some agency officials indicated that union rules allow drivers to select preferred routes on the basis of seniority. Therefore, there is no indication of the number of bilingual drivers that are utilizing their language skills, although agency officials knew of individual occurrences. Three agencies we visited—Golden Gate Transit in California; Capital Metro in Austin, Texas; and Chapel Hill Transit in North Carolina—had provided their drivers with useful phrase or word guides in Spanish, an example of which is shown in figure 8. A few other agencies, including the Capital Area Rural Transportation System and the Capital Metro in Austin, Texas, and the Ozark Regional Transit in northwest Arkansas, have bilingual employees available to translate over the radio on the bus. Many of the transit agencies reported that they had some bilingual staffs in customer information booths or ticket offices, although agencies tended not to look for bilingual customer service staffs in particular.
Agency officials in several areas stated that customer service personnel have language skills because their employees reflect the ethnic and language diversity of their region. For public meetings related to the transportation planning and decision-making process, 12 transit agencies and 4 MPOs had Spanish interpreters or bilingual employees or board members available if needed at most public meetings, while 6 transit agencies and 3 MPOs had Spanish interpreters available by request. In areas where there is a preponderance of other languages spoken, interpreters in languages other than Spanish were generally provided on a “by-request” basis, although 1 agency reported that it regularly provided Chinese translators. While 16 transit agencies we visited had cultural sensitivity included in their staff training, only 9 provided training or technical assistance to their employees that directly related to LEP issues. The New Jersey survey of transit agencies found that only one-quarter of the responding agencies had training for customer service employees that was specific to LEP service. Five agencies we visited offered free Spanish classes to employees. For instance, Chapel Hill Transit hired a contractor to teach conversational Spanish to supervisors, dispatchers, and those employees who answer telephones during work hours. The agency has not been able to offer the course to drivers because of budgeting issues, since attending the course would be considered part of the drivers’ work week and they would have to be paid overtime. However, the town of Chapel Hill does offer tuition reimbursement to drivers who want to take Spanish classes on their own time. Community groups regularly pointed out the importance of having as many bilingual bus drivers and customer service staff as possible. At a community meeting in Aurora, Illinois, held by the Chicago Area Transportation Study, the need for more bilingual bus drivers was highlighted as a community transportation need. The New Jersey focus groups with LEP travelers also found that the inability to communicate with bus drivers was one of the chief complaints of the LEP travelers in New Jersey. In terms of the availability of interpreters at public meetings, community groups we met with criticized the fact that interpreters are frequently only provided on a “by-request” basis. Agencies generally require that requests be made 3 days in advance of the meeting, but community groups told us that if an agency is advertising the meeting in different languages, as many of the agencies we visited did, they should be prepared to provide access to the proceedings of the meeting in those languages, rather than relying on the public to request translation. Fourteen transit agencies and 6 MPOs we visited posted notices of public meetings in newspapers printed in languages other than English—with 10 posting notices in more than one language. A few agencies posted such notices in as many as five different language newspapers. For example, the Los Angeles County Metropolitan Transportation Authority publishes its “Metro Briefs,” which includes notices of public meetings and other information, in Thai, Korean, Chinese, Armenian, and Spanish language newspapers. Spanish radio and television advertisements were also placed by several agencies, sometimes in relation to ongoing information campaigns, such as rail safety campaigns. For example, METRA Commuter Rail in Chicago advertised its rail safety campaign on television and radio in Spanish. 
Eleven of the 20 transit agencies we visited had some information on their Web sites that was available in other languages; however, 4 of the 11 made no indication on their home pages that translated materials were available. Of the 7 MPOs we visited, 3 had such translated information posted on their Web sites, and 2 had links on their home pages indicating that translated materials were available. Some examples of translated Web sites include the following: The Alameda-Contra Costa Transit District's Web site provides basic rider information in Spanish, Vietnamese, and Chinese—the three largest LEP language groups in its service area—that is directly accessible through links in those languages on the home page. The Regional Transportation Authority in Chicago has basic transit information available in seven other languages, and the Bay Area Rapid Transit and the Golden Gate Transit in San Francisco have such information available in eight other languages; the available languages are indicated by country flag icons on the agencies' home pages. The languages chosen are not fully reflective of the major LEP groups in these areas, however, because these Web sites also serve tourism purposes. For example, in Chicago, the Regional Transportation Authority's Web site is translated into French, German, and Japanese, although these are not major LEP groups in the city. However, the site is not accessible in Chinese, although Chinese speakers are the third largest LEP group in Chicago. Four transit agencies and 1 MPO had posted translated materials to their Web sites but did not indicate on the home pages that those materials were available. For example, materials translated into Spanish are posted on the Los Angeles County Metropolitan Transportation Authority's Web site, but a user must navigate through links that are in English to get to them. Also, the San Francisco Municipal Transportation Agency has part of its Title VI plan translated into Spanish and Chinese, but the user must navigate through at least two links in English to find the translations. Only 1 agency we visited, the Ozark Regional Transit, a small urban operator in northwest Arkansas managed by First Transit, had made its entire Web site accessible in another language, Spanish, as seen in figure 9. A link in Spanish on the home page leads to a fully translated version of the Web site. Furthermore, while many agencies have Web-based trip planners, none of the agencies we visited had made that function fully available in other languages. Translated Web sites were not frequently identified by community groups as being particularly useful for LEP persons because LEP persons often do not have access to the Internet, according to the community group representatives we met with. In addition, the New Jersey study found that LEP focus groups did not often rate translated Web sites as a major resource in addressing mobility needs. However, providing translated information on an agency Web site without an indication in that language that the information is available is likely to reduce the usefulness of that information to those LEP persons who do have Internet access.

Only 3 of the transit agencies we visited had recorded announcements in other languages on their vehicles or at their facilities, although many agencies do not utilize recorded announcements at all. Also, although a few transit agencies employ electronic media, such as televisions or ticker-tape style displays, only 1 provided translated information on its ticker-tape display.
Examples of translated recorded announcements include the following: The Capital Metro in Austin provides recorded announcements on its buses in English and Spanish, which are also broadcast outside the bus at bus stops. The Bay Area Rapid Transit has Spanish and Chinese announcements recorded and available for use in the event of an emergency in its train stations or on its trains. The Gold Line light-rail service in Los Angeles has recorded announcements of stops and rider instructions in English and Spanish. Of the transit agencies that utilize electronic ticket machines for rail services—the Chicago Transit Authority, the METRA Commuter Rail in Chicago, the Los Angeles County Metropolitan Transportation Authority, the Bay Area Rapid Transit, and the San Francisco Municipal Transportation Agency—only the Los Angeles County Metropolitan Transportation Authority had some machines accessible in English and Spanish. This agency has installed ticket machines that are accessible in Spanish on a newer light-rail line that passes through a predominantly Hispanic neighborhood, and officials told us they were considering replacing all ticket machines with machines that will be accessible in six to eight languages. One group we met with pointed out that, without translated information on fare discounts and without ticket machines that are language accessible, LEP persons may not be aware of the fare options available to them in the same manner that English speakers would be, potentially leading to LEP persons' paying more than needed for their trips.

Almost all of the transit agencies and MPOs we visited had made at least some effort to communicate more directly with communities and to conduct outreach with LEP communities and the community and advocacy groups that serve LEP persons. For example, in Greensboro, the city recently started a new program with Lutheran Family Services, a community group that works with many LEP persons, to provide an orientation for recent immigrants and refugees to the area. Under the program, city departments identified as having the most public interaction with LEP persons make interactive presentations of the services they provide. These presentations are given in English and simultaneously translated into several languages, including Spanish, Vietnamese, Arabic, and Russian, depending on the availability of translators. The city is also producing a video on its services, including public transit, which will be translated into Spanish and into other languages upon request. In Orange County, the Orange County Transportation Authority conducts a program that includes visiting Spanish-speaking senior centers to inform seniors about the agency and its services. As part of the program, the agency will bring a bus to the centers and walk the seniors through every step of riding the bus, including getting on, paying the fare, and exiting. In addition, 2 agencies reported holding information sessions at bus terminals when service changes or fare adjustments are about to occur. For example, the Durham Area Transit Authority publicizes such information sessions in the Spanish community, and then has translators on hand at bus terminals to explain service changes and answer any questions.
In terms of transportation planning and decision making, federal law and regulations require transit agencies and MPOs to involve the public in transportation planning and decision-making processes, and Title VI, as well as DOT's guidance, suggests that agencies should also make this process accessible to non-English speakers. Providing language access to planning and decision making can include all of the communication strategies used by transit agencies and MPOs in this process. Some communication strategies for public participation overlap with the strategies previously outlined, such as providing interpreters at public meetings and posting translated notices of community or public meetings on Web sites, at stations, in vehicles, in newspapers, or on television or radio. Some agencies also employed more direct tactics to include LEP groups in the planning process. For example, several transit agencies and MPOs we visited mailed out notices of community and public meetings to community and advocacy groups representing LEP persons, although in some cases, these notices were not sent out in languages other than English. In addition, several agencies we visited distributed translated public meeting notices in various establishments throughout the community. For example, the Golden Gate Transit in the Bay Area distributes meeting notices in Spanish at convenience stores, restaurants, and laundromats in predominantly Hispanic neighborhoods. Some transit agencies and MPOs also kept in regular contact with community and advocacy groups representing LEP persons or created specific advisory boards that occasionally influenced language access activities. For example, the Orange County Transportation Authority created a citizen's advisory committee that pushed for the agency to provide translated notices of service changes. In addition, some agencies reached out directly to LEP communities with regard to the planning and decision-making process. For example, Capital Metro in Austin started an outreach campaign that involved sending teams of staff and volunteers, many of whom were bilingual, into the community to provide information on new transportation projects face-to-face. Capital Metro found that this outreach resulted in greater public support for the agency and in increased ridership. Despite some of these efforts, community group representatives we spoke with were often critical of agencies' outreach efforts related to planning and decision making, saying those efforts were generally not proactive or inclusive of LEP persons. For example, one representative we spoke with told us that a public meeting on transportation projects in a predominantly Chinese-speaking neighborhood was not well attended by members of that community, and that no Chinese translator was on hand at the meeting. This representative believed that better outreach to that community to encourage community involvement would have led to higher attendance. A representative of another group explained that community meetings are often very difficult to access for Spanish-speaking members of the community, and that the local MPO tends to work with elected officials rather than working more directly with members of the community. In the New Jersey surveys and focus groups of LEP travelers, some LEP groups indicated that a lack of adequate transportation services was the biggest impediment to their mobility.
Without access to and involvement with local transit agencies and planning entities, the needs of these communities are not likely to be heard by these agencies. Furthermore, failing to provide language access to decision making can lead to complaints of discrimination. FTA has received one complaint that LEP persons were not given adequate access to the planning and decision-making process.

The efficacy of the LEP access services provided is largely unknown due to a lack of data. Most transit agencies and MPOs we visited could provide only limited information about the utilization or effectiveness of their language access services. Furthermore, few of the agencies we visited had conducted a formalized assessment of the needs of the LEP populations in their service areas, or had assessed the success of their language access activities in meeting these needs, although DOT's LEP guidance recommends that they do so. Data limitations were present in analyzing the effects of all types of LEP access services. For example, although some transit agencies print thousands of translated brochures, they do not keep track of how many brochures are placed on buses or in stations. In addition, because many brochures are printed with English and another language in the same booklet, it is impossible to know whether the language-accessible section is being utilized. Data on the utilization of bilingual or multilingual telephone operators were also generally not available for the majority of the transit agencies because they do not formally track calls received in languages other than English. In those instances where calls were tracked, they were predominantly in Spanish, and calls in other languages were generally not common. For 1 transit agency, 90 percent of the 378 calls received in languages other than English in 2004 were in Spanish. For another, just 3 percent of calls were in languages other than English and Spanish. One agency in Los Angeles did receive a relatively large percentage of calls to its language line in Russian, Farsi, and Armenian. For Web sites, data on the utilization of multilingual pages were only available in some instances. Even when tracked, these Web site data were often inconclusive regarding how often the translations were accessed relative to English portions of the sites. Finally, none of the transit agencies or MPOs we visited had determined the effectiveness of translated signs. Although little effort had been made by the transit agencies and MPOs we visited to closely examine the impact of their LEP activities, a few agencies were considering language issues as part of their more comprehensive assessments of ongoing communication and outreach efforts. For example, the Regional Transportation Authority in Chicago has started a long-term study of the overall communication strategies of all the transit agencies in Chicago, including language access issues. Part of the study's methodology was for a researcher to ride along with an LEP rider to identify areas where communication was lacking and the rider encountered problems. The study found that language barriers made it difficult to understand changes to schedules or service, or changes in how to navigate through the system. The study is looking at an increased use of pictograms as one potential solution to making access easier for LEP populations.
Despite the lack of supporting data, most agencies felt that they were adequately responding to the demand for language access services in their areas. Agency officials believed that because no complaints had been recorded concerning the level of language access provided, and because they generally did not receive many requests for translated materials or interpreters, they were doing a reasonable job of providing such access. Several agency officials did state that there was still room for improvement, and some were considering providing more information in languages other than Spanish. Agency officials also recognized the need for greater outreach efforts in general, especially for ethnic communities that may have language barriers, since turnout at public meetings by these groups is typically low. However, some agency officials told us that agencies may lack the needed staff to regularly conduct proactive community outreach activities. By contrast, community and advocacy groups we met with generally saw several shortcomings in the provision of language access services, sometimes within the larger context of how transit agencies and MPOs communicate with the public in general. In their opinion, a lack of complaints regarding LEP issues did not necessarily mean that transit agencies were doing a satisfactory job, but rather might reflect the fact that many LEP persons were not likely to complain about the provision of language access services, due to cultural differences and wariness about interacting with government agencies. Many community group representatives we spoke with complained of a lack of knowledge in the community about the materials and services that were available, and a lack of materials in languages other than Spanish. Even in areas where transit agencies do provide translated materials, representatives of community groups stated that these materials were often not readily available or easy to locate. In addition, many community groups were unaware of the existence of multilingual telephone lines, or they complained that Spanish-speaking operators were often not available when they called. In addition to questioning the level of service information available to LEP populations, community groups cited concerns about the lack of actual transit services available to certain communities where large LEP populations reside, as well as concerns about a lack of effective involvement of these communities in the planning and decision-making process, as previously discussed in this report. Many representatives we spoke with were unaware of public meetings held by transit agencies and MPOs, and they complained about the lack of ongoing communication with them and the communities they represent. Furthermore, representatives of community groups told us that these agencies rarely used them as a resource or consulted with them on LEP transportation issues. These representatives made several suggestions regarding how language access services could be improved, and which types of activities would likely be most effective in meeting community needs. Several suggestions involved facilitating the inclusion of ethnic communities, including LEP persons, in the planning process. For example, representatives from one group stated that public meetings should have agendas that are clear, specific, and of value to the community, and that these communities should be sought out and included early in the process.
Other representatives stated that established community and advocacy groups should be used more effectively as a conduit to the community. Regarding language access services, community group representatives recommended having ticket machines and discount fare information available in other languages so that LEP communities could take advantage of fare discounts. They also said that having spoken announcements in other languages or having bus drivers or other personnel available to communicate in other languages would be highly effective in improving access for LEP persons. The New Jersey survey and focus groups of LEP travelers provided some data on the needs of LEP transit users. Like the community group representatives, some LEP groups in this study reported that inadequate service in their neighborhoods was their chief concern. In terms of travel assistance needed, LEP groups most often cited having a driver or staff person available to assist them in their own language. Reaction was split among LEP travelers on whether multilingual telephone lines were helpful. Some travelers felt they were helpful, and others felt that if the information is prerecorded, it is not effective. While New Jersey Transit does have a multilingual telephone line (not prerecorded), most of the respondents in this study were not aware of the service, which was likely due to a lack of advertising. Finally, LEP groups stated that Web sites were also not particularly helpful because many of the respondents did not have access to the Internet.

On the basis of our site visit data, we determined that agencies generally did not believe that the costs for existing language access activities were burdensome. Many transit agencies believed that providing services to LEP populations makes sound business sense. Such agencies recognize that LEP populations represent a significant portion of both their current and their potential ridership. Thus, making services more accessible to LEP persons could increase ridership. For instance, officials at Austin's Capital Metro told us that their outreach efforts to LEP communities have resulted in increased ridership and greater public support for the agency. While several of the transit agencies we interviewed did not view LEP language access costs as burdensome, the majority of agencies were unable to provide much data on many of the costs associated with their LEP access services. Sometimes these costs were simply not tracked because they were spread out over several departments, or because LEP access activities were not separated from broader costs. The New Jersey survey of transit agencies also found little available data on costs, with only one-third of respondents sharing cost information. Of the survey respondents providing cost information, about one-half reported annual costs of between $10,000 and $30,000; one-quarter reported costs of under $5,000; and one-quarter reported costs greater than $100,000. Transit agencies and MPOs were able to avoid incurring substantial additional costs by utilizing existing staff. For instance, many agencies stated that rather than contracting out for interpreters at public meetings, they bring in bilingual staff members, use bilingual board members, or rely on community groups or individuals to bring their own interpreters as needed. A similar situation occurs in providing interpreters for customer service telephone lines.
While 7 transit agencies have access to some form of a language line with formalized services, many agencies have operators who are bilingual or who will utilize various bilingual staff members throughout their operations to field LEP calls when needed. In terms of printed documents and materials, many of the transit agencies and MPOs we visited have their translations done in-house using bilingual staff members. Often, translation is not part of these staff members' official responsibilities, but it is done on a voluntary basis at no cost to the agency beyond the use of staff time.

Although several transit agencies and MPOs did not report unduly burdensome costs, the cost of providing LEP access has the potential to increase significantly if agencies seek to undertake more comprehensive programs. As we previously discussed, many agencies rely on existing staff to do their translations of materials and to act as interpreters. Utilizing existing staff becomes more difficult when an agency attempts to provide access beyond just one or two languages. In that case, agencies would likely have to contract out for translation and translator services, or have to expend additional time and effort during the hiring process to find qualified candidates fluent in the languages desired. Contracting out for both translation and translator services can be costly. For example, the Capital Metro in Austin estimates that it spends between $10,000 and $15,000 a year for outside translations of materials. The Chicago Transit Authority stated that it spent over $1,100 for interpreters at four public hearings in 2004. Costs will also rise for agencies if they seek to make more comprehensive translated information about their services and programs available through multiple sources. For example, only 1 agency we visited had developed a comprehensively translated Web site. In addition to any translation costs incurred, developing fully translated Web sites is likely to require modifications to an agency's Web site architecture, which has the potential to be costly. For instance, the Chicago Transit Authority estimated that the initial costs of translating its Web site into Spanish, Chinese, and Polish could potentially be between $74,000 and $99,000. In addition, the ongoing costs for maintaining the translated sites could also be substantial. Agency officials told us that the capability to update just the Spanish section of a translated Web site on a regular basis would require a new full-time employee and the purchase of additional software, costing an estimated $47,000 to $60,000 annually. In addition, providing language line service that covers multiple languages could raise costs significantly for transit agencies, depending on the usage of the line. Costs for language line services vary, depending on the provider as well as the language being translated, but generally costs per minute range from $1.00 to $1.50, which can add up to significant amounts. For example, the Chicago Regional Transportation Authority's language line cost about $16,000 in 2004, and Access Services in Los Angeles spent $3,500 in the first 3 months of 2005. In addition, to the extent that agencies seek to provide printed materials in languages other than Spanish, there would be increased typesetting and formatting issues that would give rise to higher costs as well. This is especially true with languages using non-Roman alphabets.
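As a rough illustration of how per-minute language-line rates can translate into an annual bill, the sketch below multiplies an assumed call volume and call length by the midpoint of the rate range cited above; the call volume and call length are hypothetical assumptions chosen for illustration, not figures reported by any agency.

```python
# Rough illustration of how per-minute language-line rates scale into an
# annual cost. The call volume and call length below are hypothetical
# assumptions for illustration only, not figures reported by any agency.
RATE_PER_MINUTE = 1.25   # midpoint of the $1.00-$1.50 range cited above
CALLS_PER_DAY = 5        # assumed LEP calls routed to the language line
MINUTES_PER_CALL = 7     # assumed average call length
DAYS_PER_YEAR = 365

annual_minutes = CALLS_PER_DAY * MINUTES_PER_CALL * DAYS_PER_YEAR
annual_cost = annual_minutes * RATE_PER_MINUTE
print(f"{annual_minutes:,} minutes -> ${annual_cost:,.0f} per year")
# With these assumptions: 12,775 minutes -> about $15,969 per year,
# broadly in line with the Chicago figure cited above.
```

Beyond telephone services, the same scaling concern applies to the printed-material and typesetting costs noted above.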
For example, officials at the Orange County Transportation Authority estimated that the cost of producing materials in Chinese would be significantly more than for Spanish materials. Finally, in terms of public outreach, a shift to more proactive strategies may lead to higher costs. Transit agencies and MPOs that take the initiative to actively reach out to various community groups and LEP populations would likely need to dedicate a greater amount of staff time and resources.

DOT's LEP guidance provides grantees with a five-step framework for how to provide meaningful access to LEP populations, along with some information on how to implement such a framework; however, officials at the majority of the 20 transit agencies and 7 MPOs we visited were not aware of the LEP guidance. Of the agencies that were aware of the guidance, only 3 had changed their language access activities in response to it, and only 1 transit agency appeared to have fully implemented the five-step framework. DOT and DOJ have also provided other types of assistance on language access services—such as workshops, a DOJ-sponsored interagency Web site, and other resources—but most of the transit agencies and MPOs we visited had not accessed these resources. Officials at transit agencies and MPOs we visited stated that training and technical assistance that is widely available, and specific to language access and how to implement DOT's LEP guidance, could help them more effectively provide access to LEP populations. DOT's 2001 LEP guidance outlines five steps funding recipients should take to provide meaningful access for LEP persons, including (1) conducting an assessment of the language groups within their service areas and the language needs of these groups; (2) developing and implementing written plans outlining their strategies for ensuring access to services for LEP populations; (3) making staff aware of the LEP access plan, training them, and providing them with the tools necessary to carry out the plan; (4) ensuring that language access services are actually provided in a consistent manner and that LEP populations are aware of these services; and (5) developing monitoring programs that allow agencies to assess the success of their LEP access programs and to identify needed modifications. The guidance gives some information on how to implement the framework and examples of promising practices. For example, the guidance lists components that a written plan should generally include, although it does not provide examples of such a plan. DOT made its guidance available to its funding recipients through the Federal Register, its Web site, and the DOJ interagency Web site; however, DOT headquarters officials did not distribute the guidance through any other direct method to ensure that grantees were aware of it, such as through a policy memorandum or other outreach to grantees. According to a DOT official, DOT relies on its operating agencies to make grantees aware of the guidance, and, in turn, these operating agencies may rely on regional representatives to make grantees aware of the guidance. In the areas we visited, however, FTA regional representatives had not disseminated the guidance or made grantees in their areas aware of the guidance. Staff turnover in DOT's agencies, as well as in local transit agencies and MPOs, likely complicates agency awareness of the guidance, since newer employees may not be aware of documents issued years earlier.
In addition, according to a DOT official, DOT has not done much to reinforce awareness of the guidance, or grantees' responsibilities under it, since its original publication in the Federal Register in 2001. As a result, the majority of officials we met during our site visits who are primarily responsible for implementing aspects of DOT's guidance were not aware of it. Some of the officials we visited who were aware of the guidance had not made significant changes in response to it. Rather than citing DOT's guidance, officials at the transit agencies and MPOs we visited indicated that they provide language access services in response to their customer base and demographics, as a result of the Environmental Justice initiative, or as a result of requests from community groups or board members. Officials at many transit agencies and MPOs we visited said they had been providing language access services for many years prior to the executive order and DOT's guidance. Other officials indicated that they were not sure what their responsibilities were under the guidance. Of the 9 transit agencies and 3 MPOs we visited that were aware of DOT's guidance, only 2 transit agencies and 1 MPO made changes to their language access activities as a result. Examples of agency responses to the guidance include the following: The Alameda-Contra Costa Transit District developed an inventory of its language access activities, with several proposals for improving language access services that are now being implemented. The Metropolitan Transportation Commission in the San Francisco Bay Area indicated that, while it had not significantly changed its practices as a result of the guidance, it had increased its efforts. The Chicago Transit Authority formed a committee to examine LEP issues after the release of the guidance in 2001. This committee determined the languages spoken in its service area from Census data and has discussed the idea of implementing a survey to determine what language needs exist. No current plan or timeline for developing or implementing the proposed survey exists. Officials from the California, North Carolina, and Texas state departments of transportation reported that they had begun to monitor their small urban and rural grantees' LEP activities as a result of the executive order and DOT's guidance. As a result, some materials have been provided to grantees about their responsibilities under the guidance. Some of the transit agencies and MPOs we visited told us that technical assistance and information would be helpful in implementing DOT's guidance, and 1 transit agency cited a lack of funds and time to conduct an assessment of language access needs and to provide and evaluate language access activities. For example, an MPO in North Carolina said it would benefit from the ability to easily access practical resources on language access services for LEP persons. In addition, officials at a transit agency in California told us that an example of a needs assessment—with estimates of the cost to conduct one and effective ways to reach out to LEP persons—would be very helpful. A DOT official told us that, in anticipation of issuing DOT's revised guidance, additional training and assistance were being considered within DOT. FTA and FHWA have hosted a few workshops at annual conferences that have provided assistance on how to implement portions of the framework described in the guidance.
Presentations held by FTA and FHWA reviewed the LEP executive order and DOT's LEP guidance and provided workshop participants with real-world LEP information, including how to identify LEP populations in their service areas. For example, workshops included the following: Strategies for Complying with FHWA LEP Requirements was held at the Southern Transportation Civil Rights Conference in Orlando in August 2005. This training identified strategies to ensure that LEP persons have access to programs, services, and information through the application of DOT's guidance. In addition to this presentation, a "train the trainer" curriculum was developed regarding LEP awareness. Training attendees were provided with a manual with resources on providing language access, which included DOT's guidance, language identification flash cards, language statistical data, language assistance self-assessment tools, and commonly asked questions and answers. Fair Transportation: Incorporating Equity Concerns into Transit Planning and Operations, presented to the Conference of Minority Transportation Officials by FTA's Office of Civil Rights, occurred in July 2005. This presentation discussed the changing demographics and growing multicultural nature of the American population and the increase in the number of LEP persons nationwide. FTA staff summarized the requirements of DOT's LEP guidance and recommended that transit agencies incorporate attention to the needs of LEP persons into elements of their routine planning and operations, such as their complaint procedures, marketing, customer surveys, and community outreach. LEP: A Lesson in Redefining Public Involvement was given at the 2003 Conference of Minority Transportation Officials National Meeting and Training Conference. This presentation provided information about the LEP executive order and DOT's guidance, and used real-world examples to illustrate the complications an agency may face as a result of not providing information to LEP populations during the planning process. The presentation also defined compliance with the LEP executive order by listing important components in DOT's guidance (i.e., a needs assessment, a written language assistance plan, language assistance, and monitoring). How to Identify LEP Populations in Your Locality was given by FHWA at the American Association of State Highway and Transportation Officials' 2004 Civil Rights Conference. This presentation also provided information on the LEP executive order and DOT's guidance, as well as specific information about what resources can be used to identify LEP populations, which is the first step of conducting a needs assessment. For example, the presentation highlighted using Census and state departments of education data to identify the size and location of LEP populations. This presentation is available on FHWA's Civil Rights Web site. Besides offering workshops, DOT also participates in the Federal Interagency Working Group on Limited-English Proficiency, which provides resources to federal grantees mainly through its Web site, http://www.lep.gov. The resources available on the Web site are generally not specific to transportation, with the exception of DOT's LEP guidance and a multilingual video on using public transit, "Making Public Transit Work for You," which was produced by the Contra Costa Commute Alternative Network.
The Web site, which is maintained by DOJ, serves as a clearinghouse by providing and linking to information, tools, and technical assistance about LEP and language services for federal agencies, recipients of federal funds, users of federal programs and federally assisted programs, and other stakeholders. While most of the information on the Web site is not specifically about transportation, some of it could be applicable to transit agencies. For example, the Web site contains a variety of tools—including a self-assessment—to help local agencies assess their current language services and plan for the provision of additional language assistance to LEP individuals. The Web site also provides an overview of how to develop a language assistance plan, and it contains performance measures, such as a measure of the extent of ongoing feedback from the community, in order to evaluate the effectiveness of LEP activities. In addition, there is a video on the Web site regarding LEP access issues that could be used in training for customer service personnel at transit agencies. FTA's Title VI Web page provides a link to this Web site.

FTA and FHWA have two peer-exchange programs through which local agencies can share innovative or effective practices on various topics that have sometimes included language access. FTA's peer-exchange program, called Innovative Practices for Increased Ridership, and FTA and FHWA's collaborative peer-exchange program, called the Transportation Planning Capacity Building Program, allow agencies to easily share information over the Internet. FTA's Innovative Practices Web site serves as a central information resource for innovative strategies on various topics. Innovative practices are submitted by transit organizations and reviewed by FTA, and these practices are then made available for other transit organizations to search records, review innovations, and potentially implement similar programs. A search of FTA's Innovative Practices Web site revealed some assistance on language access issues. In one example, a transit agency in Maine created a multilingual brochure that provided basic information about riding its bus service in eight languages, including Spanish, Serbo-Croatian, Russian, Khmer, Somali, Vietnamese, French, and English, and plans to translate the brochure into six more languages, including Farsi, Arabic, Acholi, Swahili, Chinese, and Bulgarian. The transit agency credits this effort with increasing its ridership. The Transportation Planning Capacity Building Program provides resources to local agencies through its Web site, where users can search various topics to find out if any other agency has posted helpful information on those topics. LEP resources are not directly available through an explicit link on this Web site. However, a search of the program's Web site under Title VI and Environmental Justice issues revealed some assistance on language access. For example, the materials from a workshop called Identifying and Engaging Low Literacy and Limited English Proficiency populations in the Transportation Decision-making Process, which was held in Atlanta in May 2004, were made available to users on the Web site. The workshop refers to the LEP executive order and describes innovative and effective practices that some agencies have employed to improve awareness among communities and transportation planning agencies of the existence of low-literacy and LEP populations in their areas.
FTA and FHWA also provide federal grantees with training and technical assistance—through the National Transit Institute (NTI) and the National Highway Institute (NHI), respectively—that address language access issues to some extent in training on other subjects, such as public participation in the transportation planning process. Funded by grants from FTA, NTI provides training, education, and clearinghouse services in support of public transportation. Representatives from NTI identified five training courses in which language issues were discussed in the broader context of other issues. In addition, NTI is developing a course for transit employees that will specifically address cross-cultural communications, including tips for overcoming language barriers, such as speaking slowly, being patient, and not using slang words. NHI also provides training, resource materials, and technical assistance to the transportation community, although, as with NTI training, language issues are addressed as they relate to the course content. An official from NHI identified two training courses in which language issues were discussed. An example is NHI's course called Fundamentals of Title VI/Environmental Justice, in which LEP issues are woven into the course materials. The training gives examples of outreach done by various agencies, which includes providing meeting materials and flyers in Spanish. Another course, entitled Public Involvement Techniques for Transportation Decision Making, describes the importance of including LEP populations in the planning process; provides suggestions on effective ways to reach out to LEP populations, such as through community groups and informal meetings; and outlines ways to continue communication with LEP groups once a connection has been established. For example, the training states that providing translated materials and interpreters at meetings is essential in reaching non-English speakers. NHI and NTI representatives told us that they are working to combine their relevant training courses on public involvement in the transportation planning process into one course. The majority of transit agencies and MPOs we visited did not access the federal resources previously discussed because many officials were unaware that these resources exist. Only a few agencies we visited had reported attending workshops held at annual conferences on language access issues, and no agency we met with had reported accessing information available through http://www.lep.gov. Furthermore, statistics on the number of Internet users that accessed LEP resources on the Web-based peer-exchange programs indicate that these resources are not accessed often in comparison to other resources on those Web sites. A few transit agencies we visited were aware of or had accessed the NTI training entitled Public Involvement in Transportation Decision-Making, which includes a section on ensuring that nontraditional participants—that is, minority, low-income, and LEP populations—are included in the public involvement process associated with transportation planning.

Language access activities of transit agencies and MPOs are monitored through three review processes—FTA's Title VI compliance reviews, FTA's triennial reviews, and planning certification reviews conducted jointly by FTA and FHWA (described in table 1).
However, these reviews do not fully take into account Executive Order 13166 or DOT’s LEP guidance, and the criteria for finding a deficiency with regard to providing language access are inconsistent. The Title VI compliance review—an in-depth review of a limited number of transit agencies, MPOs, and state DOTs—does not assess language access activities using the LEP guidance, but rather assesses them using guidelines in an FTA circular, which asks agencies to describe the language access they provide. However, the circular does not provide agencies with a framework and offers little specificity regarding what agencies should provide in terms of language access. FTA officials told us that the circular is used for the compliance review because it is a requirement for agencies, while agencies are not required to implement all aspects of DOT’s LEP guidance. The officials further stated that they have considered including more aspects of DOT’s guidance in the compliance review. We reviewed Title VI compliance reviews completed between 2002 and 2004 and found that the scope of these reviews of language access activities varied and may not assess local agencies’ language activities across the entire breadth of communication strategies previously outlined in this report. For example, in one review, an agency was found deficient because it did not have safety and emergency information translated, yet in other reviews it was unclear whether safety and emergency information was included in the scope of the review. Furthermore, the scope of the multilingual communications portions of the Title VI compliance reviews has varied on the basis of the primary objective of each review. Some of these reviews considered only the extent to which language assistance was provided to persons wanting to involve themselves in the transit system’s planning and decision-making processes because the scope of the reviews focused solely on these processes. Other reviews evaluated only the extent to which language assistance was provided to persons wanting to use the transit system. Table 2 provides examples of deficiency findings related to language access from these Title VI reviews. In March 2003, FTA’s Office of Civil Rights conducted a pilot Title VI compliance review of the Brownsville Urban System in Texas, specifically looking at the extent to which the agency had implemented DOT’s LEP guidance. This pilot was initiated as part of a refocusing of Title VI compliance reviews on more specific issues within Title VI, including multilingual communications, fare increases, service changes, and equitable allocation of resources. Brownsville was selected by FTA’s Office of Civil Rights for the pilot assessment of multilingual communication because of its large Spanish-speaking community. The assessment guidance used in the pilot incorporated sections of DOT’s guidance in addition to the multilingual facilities section of the FTA circular used in other Title VI compliance reviews. The assessment focused on whether the Brownsville system had ensured meaningful access for LEP persons by assessing 11 different aspects of language access. For example, the review focused on whether the agency had a needs assessment and a written language assistance plan; the agency’s provision of language services (e.g., oral interpretation; written translations; and alternative, nonverbal methods); and its provision of language access to its grievance or complaint procedures.
Brownsville was found deficient in 5 of the 11 areas, as shown in table 3. FTA’s Office of Civil Rights has also recently developed an initiative that focuses on fare and service changes, but FTA’s advice to agencies related to this initiative has not always been consistent. While this initiative is based on the Executive Order on Environmental Justice, it does include an LEP component. In 2004, FTA developed and disseminated a self-assessment (also posted on FTA’s Title VI Web site) to about 20 transit agencies considering fare and service changes. This assessment included questions about the public involvement process and asked each transit agency whether it believed outreach to the LEP population was warranted and, if so, what steps it had taken or was planning to take to inform its LEP population about the service or fare changes and to offer this population the chance to comment on the changes. The majority of the agencies that returned this self-assessment reported that they had taken steps to reach out to their LEP populations using methods similar to those previously noted in this report, such as posting information about the upcoming fare increases in multiple languages in vehicles and stations, advertising the changes in other-language newspapers, and including interpreters at public meetings established to discuss the changes. Several of the transit agencies responding to this initiative stated that they had not engaged in LEP outreach because the number and proportion of LEP persons in their service areas were very small (i.e., less than 1 percent). FTA encouraged 1 of these agencies to conduct a further assessment of the LEP population, even though the agency reported that only 119 residents in its service area (less than ½ of 1 percent) did not speak English well. Yet, in another location, where the agency reported that only ½ of 1 percent of the service area population was LEP, FTA encouraged the transit agency to monitor demographic trends to determine whether limited English proficiency might become more relevant in the future, rather than conduct a further assessment. Another of the review processes, the triennial review, looks at whether transit agencies that receive Urbanized Area Formula Grants have complied with statutory and administrative requirements in 23 areas, one of which is Title VI. Because this review covers a wide variety of activities and federal requirements, it is not as in-depth with regard to Title VI as Title VI compliance reviews. However, the triennial review serves as the basic review of FTA’s oversight program. Under the Title VI section of the triennial review, specific questions make reference to DOT’s LEP guidance: “Has the grantee assessed and addressed the ability of persons with limited English proficiency to use transit services? Are schedules and other public information provided in languages other than English? If yes, what other languages are provided?” In the triennial review, the grantee is found deficient only if a complaint has been made and the grantee has not conducted an assessment of the population and the need for LEP materials. However, several community and advocacy groups we met with indicated that there may be language barriers to making a complaint, and, as we previously discussed, there may be different cultural or social norms that preclude LEP persons from making complaints (e.g., some persons may feel that it is not their place to question the government, or may feel uncomfortable doing so).
Because a deficiency is found only if a complaint has been made and the agency has not conducted an assessment, findings of deficiencies are rare, although our case studies and the New Jersey survey of transit agencies suggest that most agencies have not conducted a language needs assessment. We reviewed 34 triennial reviews conducted in fiscal year 2005 that identified one or more deficiencies in the area of Title VI and found only one deficiency related to LEP. In 2005, the Fayetteville Area System of Transit was found deficient for not conducting an assessment of the extent to which there are LEP persons in its service area. Within 90 days, the agency was to provide FTA with documentation that it had conducted an LEP assessment and with information on the steps it would take to address any needs identified. The third of the three review processes that monitor language access activities is the planning certification review, which looks at how well state and regional planning processes comply with DOT planning regulations. This review is conducted jointly by FTA and FHWA and is also not as in-depth with regard to Title VI as Title VI compliance reviews. One section of the review guidelines is directed at LEP issues with regard to public participation in the planning process, but the review does not incorporate the LEP guidance. The section states that agencies should “if necessary, make available communications for the hearing impaired and provide sign and foreign language interpreters.” It is not clear what constitutes a deficiency in these reviews, and during the past 2 years, there have been no deficiency findings regarding language. In addition to the review processes, FTA investigates Title VI complaints filed by the public alleging national origin discrimination against LEP persons. These investigations focus on whether a recipient has taken reasonable steps to provide meaningful access to LEP persons. However, FTA has received only one complaint related to language access to date. The complaint—which was made by West Harlem Environmental Action, Inc., against New York City Transit in November 2000—stated that no opportunity had been given for community groups to comment on New York City Transit’s capital plan to construct additional bus parking facilities next to an existing bus depot. The complaint further stated that the capital plan was not published in Spanish and no monolingual Spanish-speaking resident of northern Manhattan was afforded the opportunity to comment on the capital plan. New York City Transit noted that since Executive Order 13166 and the LEP guidance were issued after the development of its 2000-2004 capital program, there was no requirement to issue the plan in any language other than English at that time. FTA responded that although the executive order and the LEP guidance were issued after the plan, New York City Transit should have provided language access under FTA’s 1988 Circular on Multilingual Facilities. In resolving the complaint, FTA requested (1) copies of Spanish translations of public hearing notices and summaries of the capital program and (2) a report on what steps New York City Transit had taken to involve the public, including minority, low-income, and LEP populations, in its 2005-2009 capital planning process. FTA closed its investigation of this complaint in letters of finding transmitted in January 2005. Transit agencies and MPOs across the country are providing a wide variety of language access services.
Determining and providing reasonable and effective language access to transportation services, however, is not a clear-cut matter. To do so, an agency must have a strong understanding of the size and location of the LEP community in its area as well as the information needs of this community, although such assessments are rarely done. The agency must then deal with a whole host of issues, such as determining which language access services to provide and in what quantity, how translations are to be accomplished, where such materials or services are best distributed, and how such materials and services are best publicized to the LEP communities. For agencies in very diverse areas, the challenges are even greater. Specifically, some of the questions they may need to address are as follows: How many languages should materials and services be translated into? Is there a threshold with regard to the size or proportion of different language groups before translations should be provided? Will translated signs be too complex for transit users to effectively use? Will the costs of translations, telephone, and Web services be burdensome, given the relatively light use some of these services may receive? Furthermore, providing language access is just one part of a larger communication strategy for these agencies, which can include determining how to provide useful information in English, how to communicate with the hearing or sight impaired, or how to communicate with persons who have cognitive disabilities. One clear need in all of these instances is for agencies to reach out to these various communities and work in partnership with them to determine and meet a variety of information needs. DOT’s LEP guidance, and many of the available federal resources, can provide some assistance to transit agencies and MPOs when facing these challenges and making decisions about the level of language access to provide; however, local agencies’ lack of awareness of these resources limits their usefulness. In addition, for some transit agencies and MPOs, the available assistance was not effective in helping them answer some of the difficult questions previously outlined, because the assistance does not provide much information on what a good language needs assessment contains or how one is done. It also does not provide templates or examples of effective language access plans, nor does it provide much help in determining how to monitor and judge the effectiveness of agencies’ language access activities. Given the lack of data available on the effectiveness of services, the availability of such assistance takes on greater importance. More direct dissemination of the LEP guidance and available assistance, and the development of additional assistance related to conducting assessments, developing plans, and monitoring the effectiveness of language access activities, could help connect local agencies with information and resources that may help them improve access to their services for LEP persons. While complaints concerning language access are rare, transit agencies’ and MPOs’ language access efforts are often perceived by community groups to be lacking in certain areas, particularly with regard to the inclusion of LEP communities in decision-making processes, thus opening up the potential for further complaints against these agencies for not providing reasonable language access.
At present, however, monitoring and oversight activities conducted by FTA and, to a lesser extent, FHWA are not likely to remedy perceived gaps in the provision of language access, due to the inconsistencies in scope and criteria for what constitutes a deficiency. For example, one of the chief complaints of community groups is the lack of involvement of LEP communities, or the community groups that represent them, in decision-making processes; however, planning certification reviews do not look at involvement per se but rather focus on whether interpreters were provided at public meetings “if necessary.” Furthermore, FTA’s pilot review of language access, which used DOT’s LEP guidance, revealed several deficiencies that would not have been found under current review processes, and similar deficiencies may be common across many other agencies. It is important, though, to consider that findings of deficiency, such as those found under the pilot review, do not necessarily indicate that an agency has been discriminatory. Nonetheless, further incorporation of key aspects of DOT’s LEP guidance in existing review processes and consistent criteria for what constitutes a deficiency could help transit agencies and MPOs understand their responsibilities under the executive order and DOT’s LEP guidance and lead to improved services for LEP persons. To improve awareness and understanding of DOT funding recipients’ responsibilities to provide language access services, we recommend that, upon final issuance of DOT’s LEP guidance, the Secretary of Transportation ensure that the guidance is distributed to all DOT funding recipients through a policy memorandum or other direct methods, and direct regional personnel to make grantees in their areas fully aware of the existence of the guidance and of grantee responsibilities under it. To enhance and improve transit agencies’ and MPOs’ language access activities, we recommend that the Secretary, when issuing DOT’s revised LEP guidance, take the following two actions: Provide additional technical assistance, such as templates or examples, to aid these agencies in developing assessments of the size, location, and needs of the LEP population; plans for implementing language access services; and evaluations of the effectiveness of agencies’ language access services. Publicize to transit agencies and MPOs the availability of existing federal resources on LEP issues, including workshops, http://www.lep.gov, peer-exchange programs, and available training, and make these resources easily accessible through an explicit link to LEP Assistance on the Transportation Planning Capacity Building Program’s Web site. To ensure that transit agencies and MPOs understand their responsibilities to provide language access, and to ensure that they are providing adequate language access to their services and their transportation planning and decision-making processes, we recommend that the Secretary more fully incorporate the revised LEP guidance into current review processes by taking the following three actions: Include, in Title VI compliance reviews and triennial reviews, questions on whether agencies have conducted assessments, have language access plans, and have evaluation and monitoring mechanisms in place. Include, in planning certification reviews, more specific questions regarding language access to the planning process and the involvement of LEP communities.
Establish consistent norms across and within these review processes for what constitutes a deficiency in the provision of language access, ensuring that the conditions that constitute a deficiency are those that could directly lead to lesser service for LEP persons or to complaints against the agency. We obtained comments on a draft of this report from DOT officials who generally agreed with the findings and recommendations in the report. These officials also provided technical clarifications, which we incorporated in the report as appropriate. In particular, the officials said that DOT is already planning to take actions to address some of our recommendations, including ensuring that its revised LEP guidance is fully and appropriately distributed, and enhancing its training and technical assistance to grantees. We also provided DOJ with an opportunity to comment on segments of the report that pertain to DOJ processes and policies. DOJ provided technical clarifications, which we incorporated in the report as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretary and other appropriate officials of the Department of Transportation. We will also make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. In addition, translated summaries of this report in Spanish, Chinese, Vietnamese, and Korean will be available at no charge on the GAO Web site at http://www.gao.gov/special.pubs/translations. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or at siggerudk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To determine the types of language access services that transit agencies and metropolitan planning organizations (MPO) provide to limited English proficiency (LEP) populations, we visited seven metropolitan statistical areas in Arkansas, California, Illinois, North Carolina, and Texas. We used U.S. Census Bureau data to select site visit locations on the basis of the size and proportion of the LEP population, the number of languages spoken, the growth of the LEP population, and the extent of public transit use, to capture a variety of different circumstances agencies may face in providing language access services. We eliminated from our site visits areas that had recently had in-depth reviews by the Federal Transit Administration (FTA), as well as agencies that had been highlighted in a recent report for best practices in providing LEP access, to broaden the limited amount of research and data available in this area. Notable areas eliminated from our potential site visits for these reasons included New York, New York; Washington, D.C.; Portland, Oregon; and Seattle, Washington. The relevant statistics for the seven areas we visited are presented in table 4. We conducted semistructured interviews with officials from 20 transit agencies and 7 MPOs in these locations who were responsible for some facet of providing language access services. We interviewed officials from various departments, including operations, marketing, public affairs, community relations, training, civil rights, and planning. At smaller agencies, we interviewed the general managers as well as other agency officials. We chose agencies in each location according to their size and characteristics.
For example, we interviewed the largest transit agency in each location, and where there were several transit agencies operating, we then interviewed the next largest agencies. In certain locations, such as the Southern California area and the San Francisco Bay Area, we were unable to interview all of the agencies in the area due to the large number of transit agencies. In these areas, we chose additional agencies on the basis of different operating characteristics. For example, in Los Angeles, California, we chose to interview the major provider of specialized transit services for persons with disabilities, whereas, in the San Francisco Bay Area, we chose a suburban bus system to complement the urban systems we were obtaining information on. We also interviewed officials from the major MPOs in areas we visited. In some cases, an MPO also may provide some level of transportation service. For example, the Metropolitan Transportation Commission in the San Francisco Bay Area operates the region’s 511 transportation information lines. In these instances, we did not count such agencies as transit agencies, but we included the services they provide in the appropriate section of this report. We structured the agency interviews on the basis of the elements of the Department of Transportation’s (DOT) LEP guidance and the findings of previous research and surveys conducted of the language access activities of transit agencies. During our interviews, we discussed the types of language access activities provided in terms of day-to-day transportation services and in the planning and decision-making process; we also discussed the costs and effects of these services. We also reviewed documents and other information in support of the language access services provided by transit agencies and MPOs. We also interviewed representatives from 16 community and advocacy groups in the areas we visited as well as representatives from national advocacy groups, such as the National Council of La Raza, the Center for Community Change, and the National Asian Pacific American Legal Consortium. We chose groups in the locations we visited on the basis of recommendations from these national groups, FTA regional officials, transit agency officials, and our own research into the transportation issues in these areas. We structured these interviews in order to understand the perspectives of these community and advocacy groups with regard to how transit agencies and MPOs in the areas are providing access to their services to the communities these groups serve, and the effects of these services on meeting the needs of LEP communities. The agencies and groups we included in our interviews are listed in table 5. We also conducted interviews with officials within the Texas, California, and North Carolina departments of transportation and conducted additional Internet research of state departments of transportation, to determine how these agencies were involved in providing or monitoring language access. Furthermore, we requested that the Community Transportation Association of America, which operates a list-serve of Job Access and Reverse Commute grantees, send a query requesting that any grantees involved in providing language access services under those grants provide information on the types of services they offer. We received two responses from this query. 
We complemented these case studies and interviews with findings from a survey of transit agencies across the country and from surveys and focus groups with LEP persons in New Jersey, conducted for the New Jersey Department of Transportation. We reviewed the methodology of this study and found it to be sufficiently reliable for the purposes of our report. However, the results of the surveys and focus groups reported in this study cannot be generalized to the full universe of transit agencies or LEP persons. Rather, we used the findings in this study to provide additional information on the types of strategies that agencies use as well as the types of challenges that LEP populations face. We synthesized the information we collected from the site visits, structured interviews, and the New Jersey study. We analyzed this information to identify major themes, commonalities, and differences in the level of language access provided by transit agencies and MPOs. We observed that almost all transit agencies and MPOs we visited provided some level of language access services, although levels varied across agencies and locations. Because these findings are based on a nonprobability sample of case studies and a survey of 32 transit agencies, they cannot be generalized to the full universe of transit agencies or MPOs across the country. These case studies are meant to highlight the variety of different strategies agencies may use to improve communication with LEP persons, as well as key themes that emerge under various circumstances. To understand how DOT assists local agencies in providing language access services, we interviewed officials at the Offices of Civil Rights in FTA and the Federal Highway Administration (FHWA), representatives from the National Transit Institute and the National Highway Institute, and DOT regional officials. During our interviews, we identified and discussed various available resources that may include information on language access activities, such as training curricula and workshops. We interviewed officials from FHWA offices in California, Maryland, and New Jersey regarding some of their LEP activities, such as hosting workshops at annual conferences and providing other assistance to grantees. We reviewed Executive Order 13166, the Department of Justice’s (DOJ) and DOT’s draft LEP guidance, other federal laws and regulations, and research related to providing access to services to LEP populations. We requested and reviewed copies of the training courses we identified. We also identified and reviewed various other DOT and federal resources, including http://www.lep.gov and peer-exchange programs maintained by FTA and FHWA, to determine whether language access issues were addressed. To understand the extent to which local agencies are accessing DOT’s resources, we discussed with local agency officials their awareness and implementation of DOT’s LEP guidance. We also discussed with these officials whether the agency had accessed DOT’s resources and, if so, whether the resources had been helpful in the provision of language access activities. In addition, we reviewed Web statistics for materials available on the Internet for additional information on how often those materials were accessed. To document how FTA and FHWA monitor transit agencies’ and MPOs’ provision of language access services for LEP populations, we interviewed officials from the FTA Office of Civil Rights; the FTA Office of Program Management; and FHWA’s Office of Planning, Environment and Realty.
We also interviewed FTA regional representatives from Arkansas, California, Illinois, North Carolina, and Texas. We reviewed oversight documents pertaining to Title VI compliance reviews, triennial reviews, and planning certification reviews to determine how language access is considered by these reviews (i.e., specific questions regarding language access activities) and to what degree these reviews incorporate DOT’s LEP guidance. In addition, we collected available data on any findings from these reviews to analyze the extent to which norms have been developed for reviewers to determine whether deficiencies are found and reported. Furthermore, we reviewed the status and outcomes of LEP complaints. We conducted our work from February 2005 through October 2005 in accordance with generally accepted government auditing standards.

Executive Order 13166, Improving Access to Services for Persons with Limited English Proficiency: Executive Order 13166 was signed by President Clinton in 2000. It clarifies federal agencies’ and their grant recipients’ responsibilities under Title VI to make their services accessible to LEP populations. http://usdoj.gov/crt/cor/Pubs/eolep.htm

DOT Guidance to Recipients on Special Language Services to Limited English Proficient (LEP) Beneficiaries: DOT’s guidance was issued in 2001. It discusses strategies for providing services to LEP persons and outlines a five-step framework for an effective language access program as well as innovative practices. http://usdoj.gov/crt/cor/lep/dotlep.htm

Federal Interagency Working Group on Limited-English Proficiency: The http://www.lep.gov Web site, maintained by DOJ, serves as a clearinghouse, providing and linking information, tools, and technical assistance regarding LEP and language services for federal agencies, recipients of federal funds, and users of federal programs and federally assisted programs. The Web site includes a self-assessment tool and an overview of how to develop a language assistance plan with performance measures. There is also a video available from the Web site on LEP access issues that could be used in training for customer service personnel at transit agencies. http://www.lep.gov

FTA Title VI Web site: FTA’s Title VI Web site provides information and resources on Title VI, including links to Executive Order 13166, DOT’s LEP guidance, and http://www.lep.gov. http://fta.dot.gov/16241_ENG_HTML.htm

FHWA Office of Civil Rights Web site: FHWA’s Office of Civil Rights Web site provides links to Title VI, Executive Order 13166, and DOT’s LEP guidance. http://fhwa.dot.gov/civilrights/nondis.htm

Workshop entitled How to Identify Limited English Proficient (LEP) Populations in Your Locality: This workshop was given by FHWA at the American Association of State Highway and Transportation Officials’ 2004 Civil Rights Conference. The workshop provides information on the LEP executive order and DOT’s LEP guidance, as well as specific information about the resources that can be used to identify LEP populations. http://fhwa.dot.gov/civilrights/confworkshops04.htm

FTA’s Innovative Practices to Increase Ridership: The Web site serves as a central information resource on innovative strategies on various topics. Innovative practices are submitted by transit organizations, reviewed by FTA, and then made available for other transit organizations to search records, review innovations, and potentially implement similar programs. Innovative practices regarding language access services are available.
http://ftawebprod.fta.dot.gov/bpir/

FTA and FHWA’s Transportation Planning Capacity Building Program: Users can search various topics to find out if a like-sized agency, or any type of agency, has posted any helpful information on those topics. Information regarding language access services is available. http://planning.dot.gov/

National Transit Institute course entitled Public Involvement in Transportation Decision-Making: This course includes a section on ensuring that nontraditional participants—that is, minority, low-income, and LEP populations—are included in the public involvement process associated with transportation planning. http://ntionline.com/

National Highway Institute courses entitled Fundamentals of Title VI/Environmental Justice and Public Involvement in the Transportation Decision-Making Process: These courses include a discussion of language access issues in the planning process. http://nhi.fhwa.dot.gov/

Caltrans Title VI Web site: Caltrans’ Title VI Web site includes information and resources on Title VI and links to FHWA’s Office of Civil Rights training resources, the Web site for the Civil Rights Division of DOJ, and lep.gov. In addition, there are three training videos available for free, one specifically on language assistance for LEP persons. http://dot.ca.gov/hq/bep/title_vi/t6_index.htm

Mobility Information Needs of Limited English Proficiency (LEP)

In addition to the individual named above, Rita Grieco, Assistant Director; Michelle Dresben; Edda Emmanuelli-Perez; Harriet Ganson; Joel Grossman; Diane Harper; Charlotte Kea; Grant Mallie; John M. Miller; Sara Ann Moessbauer; Marisela Perez; Ryan Vaughan; Andrew Von Ah; Mindi Weisenbloom; and Alwynne Wilbur made key contributions to this report.
More than 10 million people in the United States are of limited English proficiency (LEP), in that they do not speak English at all or do not speak English well. These persons tend to rely on public transit more than English speakers. Executive Order 13166 directs federal agencies to develop guidance for their grantees on making their services accessible to LEP persons. The Department of Transportation (DOT) issued its guidance in 2001, with revised guidance pending issuance. This report reviews (1) the language access services transit agencies and metropolitan planning organizations have provided, and the effects and costs of these services; (2) how DOT assists its grantees in providing language access services; and (3) how DOT monitors its grantees' provision of these services. Transit agencies and metropolitan planning organizations provide a variety of language access services, predominantly in Spanish, but the effects and costs of these services are largely unknown. Types of services provided included, among other things, translated brochures and signs; multilingual telephone lines; bilingual drivers; and interpreters at public meetings. However, few agencies we visited had conducted an assessment of the language needs in their service areas, or had conducted an evaluation of their language access efforts. As a result, it is unclear whether agencies' efforts are comprehensive enough to meet the needs of LEP persons, and community groups in the areas we visited saw important gaps in agencies' services. In addition, although those costs are largely unknown, several agencies saw providing language access as a cost of doing business, not as an additional cost. However, agency officials told us that if efforts were expanded to include additional services or languages, costs could become prohibitive. DOT assists grantees in providing language access through its guidance and other activities, but DOT has made limited efforts to ensure that grantees are aware of the available assistance, which was not often accessed by the agencies we visited. This assistance includes DOT's guidance--which provides a five-step framework for how to provide meaningful language access--as well as workshops and peer-exchange programs that include language access practices, and training courses that touch on language issues. DOT also participates in a federal LEP clearinghouse, www.lep.gov. However, few agencies we visited had accessed these resources. Several local officials stated that easily accessible training and assistance specific to language access and examples of how to implement DOT's guidance could help them more effectively provide access to LEP populations. Transit agencies' and metropolitan planning organizations' provision of language access services is monitored through in-depth civil rights compliance reviews and two broader reviews--triennial reviews of transit agencies and planning certification reviews. However, these reviews do not have consistent criteria for determining whether an agency is deficient in providing such services. Furthermore, these reviews do not fully reflect Executive Order 13166 or DOT's guidance. Without thorough and consistent monitoring that takes into account the guidance, local agencies' language access activities will likely remain varied and inconsistent.
Private, public, and nonprofit employers can use information from criminal history records for non-criminal-justice purposes, such as screening an individual’s suitability for working with children, the elderly, or other vulnerable populations. States primarily create and maintain criminal history records, but the FBI facilitates the interstate sharing of these records for criminal and non-criminal-justice purposes. Specifically, state central record repositories collect criminal history information from law enforcement agencies, courts, and other agencies throughout the state and submit records to the FBI. For example, state repositories collect arrest records from local police departments and disposition records from prosecutors or courts. The FBI maintains a fingerprint-based criminal history record repository called the Next Generation Identification (NGI) System (previously the Integrated Automated Fingerprint Identification System). The NGI System contains records from all states and territories, as well as from federal and some international criminal justice agencies. The FBI’s Interstate Identification Index provides for the decentralized interstate exchange of criminal history record information for authorized criminal and non-criminal-justice purposes and functions as a part of the NGI System. In general, states conduct FBI criminal history record checks by searching an applicant’s fingerprints against records in the NGI System (see fig. 1). The FBI generally provides the results of an FBI criminal history record check to a designated agency—such as a state department of health and human services or board of occupational licensing—through a criminal history summary. This summary—often referred to as a criminal history record, or rap sheet—includes the name of the agency that submitted the criminal record to the FBI; the date of the arrest; the arrest charge; and the disposition of the arrest, if known to the FBI. Federal laws that require or authorize states to conduct FBI criminal history record checks for non-criminal-justice purposes—including employment and licensing—cover a wide range of industries, such as those that serve vulnerable populations. These federal laws may authorize states to conduct FBI checks using just the authority of the federal law without requiring a related state statute. This report addresses the states’ use of three federal laws, as shown in table 1. In addition to federal laws, states may pass statutes that the Attorney General approves pursuant to Public Law 92-544 that require or authorize employers or organizations to request FBI criminal record checks for applicants seeking employment or licensing in their state. For example, states can require FBI checks for non-criminal-justice purposes in areas regulated by the state, such as for civil servants and nursing home workers. States can also pass laws to implement federal laws, which can include, for example, additional provisions on the types of criminal activities that would disqualify an applicant from employment or licensing. All state laws related to Public Law 92-544 have to be approved by the Attorney General. According to FBI officials, as of 2014, states had passed a total of about 2,800 laws that require or authorize FBI criminal history record checks, which include checks for employment or licensing purposes.
DOJ, states, and others have emphasized the importance of having complete records when conducting FBI checks—records that contain the arrest charge and the disposition of the arrest (e.g., conviction or acquittal)—since incomplete records can lead to delays in completing checks and have adverse impacts on applicants. In 1995, DOJ established the National Criminal History Improvement Program (NCHIP) to enhance the quality, completeness, and accessibility of criminal history record information maintained by the states. All 50 states, the District of Columbia, and U.S. territories have received grant awards. The FBI also helps to ensure the integrity of state-level criminal record systems through periodic audits. Employers can also obtain background information—including criminal record information—from private sector companies that compile and sell information that they may obtain from state courts or other public sources. These companies are classified as consumer reporting agencies under the Fair Credit Reporting Act (FCRA). This act contains provisions that are intended to require these agencies to adopt reasonable procedures for using consumer credit, personnel, insurance, and other information in a manner that is fair and equitable to the consumer, with regard to the confidentiality, accuracy, relevancy, and proper utilization of such information. At the federal level, the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) regulate private background companies and employers that conduct background checks that may contain criminal record information. In addition, the Equal Employment Opportunity Commission (EEOC) regulates and oversees employers’ use of criminal record information provided by private background companies under Title VII of the Civil Rights Act of 1964. Most states that responded to our nationwide survey reported that they conduct FBI record checks for individuals working with vulnerable populations and other employment sectors we reviewed. States not conducting such checks reported lacking designated state agencies to review the check results, among other challenges. The Attorney General has proposed expanding FBI record checks to employers and other third parties, but also noted that any expansion should consider concerns about securing data and protecting personal information. The National Child Protection Act (NCPA), as amended, authorizes states to have procedures that require qualified entities designated by the state to contact an authorized state agency to request an FBI criminal background check. This check is for the purpose of determining whether a person has been convicted of a crime that bears upon the person’s fitness to have responsibility for the safety and well-being of children, the elderly, or individuals with disabilities. Our survey results show that 45 of 48 respondents conduct FBI record checks for individuals seeking jobs or licenses to be teachers in schools—positions that are typically regulated by states. The largest gap in FBI record checks was for volunteers serving the elderly or individuals with disabilities, where 36 of 47 respondents reported conducting such checks, but 11 of 47 respondents did not, as shown in figure 2. 
The primary reasons states reported not conducting FBI criminal history record checks for employment or volunteer positions covered by the NCPA were that the states lacked a designated state agency to review the FBI record check results or did not have licensing or regulatory requirements to check volunteers. One survey respondent noted that, in some cases, state legislatures do not support expanding the availability of background checks to certain classes of employees, despite the existence of federal laws that seek to encourage such checks. Recognizing concerns about the background check process available to volunteer organizations, the Prosecutorial Remedies and Other Tools to End the Exploitation of Children Today Act of 2003 established the Child Safety Pilot Program. The act required the Attorney General to establish an 18-month program that would provide for the FBI to conduct 100,000 criminal history record checks requested by certain youth-serving organizations, such as the Boys and Girls Clubs of America. Under the pilot, the FBI provided the results of a check to the National Center for Missing and Exploited Children—rather than through a state agency—which made suitability determinations and conveyed the decision to the organization that made the request. According to officials from the national center, of the approximately 105,000 FBI checks that the organization conducted over the 8-year pilot, about 6,500 (6.2 percent) of all applicants had a criminal record that disqualified them from working with children. Officials who represent volunteer organizations said that they plan to continue to pursue legislation in upcoming congressional sessions that would provide for certain youth-serving organizations to use information from FBI record checks to screen applicants. Some states have also developed programs that allow volunteer organizations to obtain information from FBI criminal record checks. For example, according to officials from the Florida Department of Law Enforcement, Florida has established a program—which the FBI approved—that authorizes certain volunteer organizations to receive the results of FBI record checks from the department and determine an applicant’s suitability for employment, rather than relying on a Florida agency to adjudicate the results on the organization’s behalf. Senior officials from the Florida Department of Law Enforcement noted that the state requires volunteer organizations to sign a user agreement before gaining access to FBI-maintained criminal records, a requirement that is intended to help ensure that the organizations properly use and safeguard the records. The Florida officials also said that the department can audit these entities to ensure compliance with the agreement. In general, the Edward M. Kennedy Serve America Act requires, with limited exceptions, that entities conduct FBI criminal history checks for certain individuals working with vulnerable populations. These individuals serve in positions that provide them with a living allowance, stipend, national service educational award, or salary through a program receiving assistance under national service laws. Among other things, these individuals can tutor children in reading, run after-school programs, provide health information to a vulnerable population, and conduct neighborhood watch programs.
Our survey results show that 30 of 44 respondents conduct FBI record checks for national service program grant recipients and 14 of 44 respondents do not. Of the 14 respondents that do not conduct FBI checks, 12 reported not having procedures or agencies in place to review the results of checks for national service program grant recipients, 6 reported lacking sufficient resources to review check results, and 5 reported lacking a state licensing or regulatory need to conduct such checks. Survey respondents could provide more than one reason for not conducting checks. According to a senior official from the Corporation for National and Community Service (CNCS)—the federal entity that administers programs established under national service laws—CNCS has received hundreds of requests from national service program grantees for an exemption from the FBI record check requirement and for approval to use an alternative screening procedure, such as the ability to use a substantially equivalent process. The official noted that a subset of these requests are from organizations that seek an exemption to the FBI record check requirement because of the difficulties they have encountered in obtaining such checks. conducting state and national fingerprint-based criminal history record checks for national service program participants, in part to allow states to make their own suitability determinations. In October 2014, CNCS officials stated that in light of the challenges national service programs have faced in obtaining FBI record checks, CNCS is assessing the costs and benefits of acting as a national clearinghouse for such checks if no option is available in the organization’s own state. CNCS expects to make a decision in 2015 regarding whether acting as a clearinghouse is feasible. In general, the Private Security Officer Employment Authorization Act (PSOEAA) of 2004 and its associated regulations permit authorized employers to submit the fingerprints of an employee or applicant for employment as a private security officer to a state repository of a participating state for purposes of conducting an FBI criminal history record check. Congress found that employment of private security officers in the United States was growing rapidly; private security officers function as an adjunct to, but not a replacement for, public law enforcement by helping to reduce and prevent crime; and such officers protect individuals, property, and proprietary information. Private security officers provide protection to banks, hospitals, manufacturing facilities, nuclear power plants, airports, and schools, among other operations. Our survey results show that 37 of 43 respondents conduct FBI record checks for private licensed security officers, and 7 of 43 respondents conduct FBI record checks for private unlicensed security officers. The primary reasons states reported not conducting FBI record checks for private security officers were that the states did not license or regulate security officers or did not have a designated state agency to adjudicate the results of the checks. Under certain circumstances, PSOEAA regulations also generally permit authorized employers to submit the fingerprints to a state other than the state in which the employee or applicant would be working for purposes of an FBI criminal history record check. The chair of the Compact Council informed us of 1 state (Minnesota) that was conducting FBI checks for employers located in other states.
According to a senior official from Minnesota’s Bureau of Criminal Apprehension, the bureau did not face any challenges in conducting such checks. The official noted, however, that only one employer had requested Minnesota’s help, and that employer asked Minnesota to conduct FBI checks for employees in 11 other states where it operated. According to an executive-level official from the National Association of Security Companies—the nation’s largest contract security trade association—requiring that a state agency be involved in conducting FBI checks is a barrier for employers. The official explained that for a state agency to set up an FBI background check program, the state may need legislative authority, appropriations, and employees with expertise in interpreting criminal records, among other things. The official added that PSOEAA and federal requirements only allow states to provide employers with a determination as to whether or not an applicant failed to meet the state’s or PSOEAA’s criteria that would disqualify the applicant from employment, and that the private security industry would like to see revisions to PSOEAA that would allow employers greater access to the actual information returned in an FBI record check. The official said that the association plans to propose legislative changes in future congressional sessions to address these and other barriers. In 2005 and 2006, the Attorney General and others recommended expanding employer and third-party access to FBI criminal history record checks as a way to overcome barriers presented by the need for a state agency to adjudicate record check results. For example, according to a 2005 national task force report on criminal background checks, states faced challenges in conducting FBI record checks for employment purposes, which resulted in inconsistent use of records across the states. The task force made recommendations to state and federal policymakers regarding access to records for non-criminal-justice purposes, which included removing the federal requirement that a public agency must receive record check results. Senior officials from SEARCH and the FBI, as well as a state official we met with who participated on the task force, said that they were not aware of any specific actions that either Congress or DOJ took related to expanding record access as a result of the recommendations. Our discussions with officials from organizations representing employers for the various employment sectors we reviewed indicate that the access issues identified in the 2005 report are still of concern to employers today. U.S. Department of Justice, The Attorney General’s Report on Criminal History Background Checks. capacity allows—that private employers are authorized to use to inquire if an applicant or employee has a criminal history. Senior DOJ officials, a former official from the Attorney General’s office with direct knowledge of the report’s history, and SEARCH officials who work with states on related policy issues were not aware of any specific actions on these recommendations. Senior officials from the FBI’s Criminal Justice Information Services Division and officials from all 4 case study states raised concerns that would need to be considered in any attempt to expand access to FBI criminal record checks.
Specifically, FBI officials said that a primary concern is the extent to which nongovernmental entities would be able to adequately protect and store criminal history record information and the potential impacts on individual privacy rights if records were to be shared extensively beyond state agencies. The officials added that another concern is the potential resulting increase in the FBI’s workload in auditing these entities’ compliance with security policies regarding the storage, use, and dissemination of criminal record information. Senior officials from SEARCH and all 4 of our case study states noted that expanding access too broadly to nongovernmental entities could mean that state agencies could lose the fees collected for facilitating checks, thereby undermining the revenue streams that states use in turn to maintain and operate criminal history repositories. The SEARCH and Attorney General reports discussed above noted similar concerns with expanding access, and proposed some potential solutions that could balance expanded access with data security and applicant privacy concerns. For example, SEARCH recommended steps to improve the completeness and accuracy of criminal history records and protection of applicant privacy rights, through allowing individuals to access and correct their records, among other things. The task force also recommended expanding access only to organizations that appoint individuals to positions or responsibilities involving access to vulnerable populations, sensitive information, or as otherwise deemed necessary by the Attorney General for public safety or national security. In addition, to address concerns regarding information security, the Attorney General recommended that (1) criminal and civil penalties be established for those provided access under any new authority for the unauthorized use of criminal history information and (2) users of such information should enter into agreements that specify the requirements for access, including security of the information and notice to individuals concerning record access and correction and fair use of the information. Further, to address concerns about state fee revenues, the task force noted that any expansions in access should require authorized entities to go through state criminal history repositories for access—not directly to the FBI— unless states have specifically opted out of providing such access. According to BJS surveys of state criminal history information systems, from 2006 through 2012, states reported making progress in providing complete criminal history records to the FBI—records that include the arrest and the final disposition of the arrest. For example, BJS surveys show that the number of states that reported providing more than 75 percent of their arrest records with final dispositions increased from 16 states in 2006 to 20 states in 2012, as shown in figure 3. According to officials from BJS’s Statistical Planning, Policy, and Operations Division and senior officials from our 4 case study states, factors that help states compile complete criminal records include the automation of criminal record information—such as devices that digitally record and electronically transmit fingerprint images from police departments to state agencies that maintain criminal history records—and improved coordination among local criminal justice entities. 
For example, according to a director in the Florida Department of Law Enforcement, the high level of coordination among officials on the Florida Criminal and Juvenile Justice Information Systems Council has helped increase the completeness of state records because the members collectively decided on the best use of federal grant funding to improve state record completeness. Nevertheless, in 2012, 10 states reported that 50 percent or less of their arrest records had final dispositions. FBI officials noted that it is not possible for states to have 100 percent complete records because it can take more than 1 year for criminal felony cases to conclude and disposition information to be entered into criminal record systems. FBI officials also noted that the statement in the 2006 Attorney General’s report on criminal history background checks that only 50 percent of arrest records in the FBI’s Interstate Identification Index have final dispositions reflects a misunderstanding of how criminal history records are maintained. Rather, during an FBI criminal history record check, the FBI accesses certain records that states maintain that are not forwarded to the FBI. For example, some states forward arrest records to the FBI but not disposition information. For these states, during an FBI record check, the FBI reaches out to the state to obtain the arrest and disposition information from the state’s records. The impact of incomplete criminal history records on individuals seeking employment or licensing depends in part on whether a state’s laws permit employers or licensing agencies to hire applicants contingent upon the completion of a criminal record check. According to senior repository officials from our 4 case study states, 2 states permit contingent hiring for certain positions and 2 do not. For example, a manager within the Idaho State Police’s Bureau of Criminal Identification said that it could take months to obtain disposition information from other states, but that applicants are placed in certain jobs if they are supervised pending the results of the FBI record check. In contrast, a bureau chief within the California Department of Justice said that applicants cannot be hired or licensed until all aspects of the background check are completed, which includes following up on incomplete criminal records. A senior official from Washington’s Department of Social and Health Services Background Check Central Unit said that incomplete records can lead to negative impacts on the applicant, since the applicant is responsible for obtaining missing information from courts. The official added that when employers have urgent hiring needs, they may choose another qualified applicant rather than wait for an individual to gather court records that are needed to complete the FBI record check. According to a 2005 BJS report, complete records enable hiring entities to avoid delays due to the time needed to track down missing criminal record information. Senior officials from central record repositories at all 4 of the states we visited noted that incomplete criminal records returned from an FBI record check can result in a variety of challenges when screening an individual’s suitability for employment or licensing. For example, an official from 1 state said that because of limited staff and resources, criminal justice agencies in other states may not be responsive to requests for information on incomplete criminal records. 
The official noted that these agencies may also give a higher priority to addressing inquiries from law enforcement, further delaying responses to record inquiries for employment and licensing purposes. Repository officials from another state noted that it generally takes 1 or 2 days to finish an FBI criminal record check when no records are returned or the records are complete, but that it can otherwise take up to several months, for example, to conduct the research needed to complete a record. Further, officials from the four record repositories said that state privacy laws—which can restrict the information that agencies are allowed to disseminate for non-criminal-justice purposes—can affect a state's ability to obtain information. For example, officials in Washington State said that according to state law, they can disseminate a criminal record for non-criminal-justice purposes only if the record contains conviction information or arrest information that is less than 1 year old. Also, the officials said that it can be difficult to interpret whether records returned from another state would prohibit employment or licensing in the state where an individual is seeking employment, since state laws can define felonies and misdemeanors differently. The officials noted that these differences require following up with the state that generated the record, thus adding more time to the background check. DOJ has several programs designed to help states improve the overall quality of criminal history records—including the completeness of records—and officials from our 4 case study states said that they generally found DOJ's assistance to be helpful. Our analysis of published reports and interviews with officials from our case study states, BJS, SEARCH, and the National Center for State Courts indicate that state challenges in submitting complete records to the FBI generally originate in local jurisdictions, and states have used DOJ's assistance programs to help address these challenges. DOJ provides a number of different resources to help states improve criminal record completeness, including grant funding, best practice sharing, task forces, and audits. National Criminal History Improvement Program: DOJ assists states in improving the completeness, accuracy, and timeliness of criminal history records through the National Criminal History Improvement Program (NCHIP). For fiscal years 2008 through 2012, DOJ targeted approximately $23 million in NCHIP grants to state record disposition improvement projects, such as updating records that only contain arrests to include disposition information and upgrading and automating criminal history record systems to capture data on dispositions from courts and prosecutors. Senior officials from all 4 of our case study states reported that NCHIP grants have helped improve the quality and completeness of their criminal history records. For fiscal years 2008 through 2012, NCHIP grant funds ranged from $6 million to $11 million and averaged approximately $9.5 million per year. Appropriations for NCHIP for fiscal year 2014 totaled $46.5 million, primarily intended to support state efforts to increase the number of felony records and criminal-related mental health records available for firearm background checks through the National Instant Criminal Background Check System.
BJS officials who administer the NCHIP grants said that an increase in felony records available for firearm checks will also benefit non-criminal-justice checks because the FBI searches the Interstate Identification Index, which stores felony records for both types of checks. Best practices: DOJ has also worked to help states improve record completeness by sharing best practices through informational websites and reports, among other avenues. For example, under a DOJ grant, the National Center for State Courts is creating a web-based tool kit that brings together information from state pilot projects, focus groups, and other research reports to identify, among other things, best practices on how to overcome disposition reporting and coordination challenges among state and local criminal justice agencies. Also, under DOJ's funding and direction, SEARCH is implementing the State Repository Records and Reporting Quality Assurance Program, which includes a voluntary self-assessment checklist for states as a way to disseminate best practices. According to a director at SEARCH, after a state completes the checklist, a SEARCH official provides on-site technical assistance to review the responses and recommend additional state follow-up actions. The official noted that, as of September 2014, SEARCH officials had provided on-site technical assistance in 20 states. The official said that the program will continue under BJS grant funding in order to provide on-site technical assistance to additional states, continue improving the checklist, and incorporate new standards that states need to meet in order to utilize the FBI's technology advancements related to criminal record information. Disposition Task Force: The FBI's Advisory Policy Board formed the Disposition Task Force in 2009 to address issues related to the completeness, accuracy, and availability of criminal record dispositions from courts and prosecutors and develop a national strategy for improving the quality of disposition reporting. The task force is composed of representatives from different components of state and local criminal justice systems—including state repositories, state courts, prosecutors, and Compact Council members—as well as federal criminal justice officials, such as from DOJ and OPM. According to an FBI official who helps facilitate task force meetings, the task force established an initial set of goals in 2009, but under new leadership in 2012 determined that these goals would not address the greatest disposition-reporting challenge—the lack of national disposition-reporting standards. As a result, the FBI official noted that the task force decided to take a broader look at disposition-reporting issues, and evolved its initial goals into five broader goals and the foundation of a national strategy. According to FBI officials, as of September 2014, the task force had achieved one of its 2012 goals by refining the calculation that the task force would use to estimate the rate at which state and federal arrest records contained dispositions and reaching consensus on the definition of the term "disposition" to calculate the disposition rate. The officials noted that the task force had also taken steps to achieve two other goals by (1) reviewing the results of a National Center for State Courts national survey to identify existing federal and state requirements for collecting and reporting disposition information, and (2) identifying steps to develop and produce a guide on disposition best practices.
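For illustration only (this report does not spell out the task force's refined formula, so the following is an assumed, simplified formulation rather than the task force's adopted definition), a disposition rate of this kind is typically expressed as the share of arrest records that contain a final disposition:

disposition rate = (arrest records with a final disposition / total arrest records) × 100

Under this formulation, a repository holding 1 million arrest records, 800,000 of which include a final disposition, would have a disposition rate of 80 percent, the same type of measure reflected in the BJS survey results discussed earlier.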
The task force, however, did not have a plan with time frames or milestones for either completing the best practices guide or achieving the remaining goals, which could also lead to a national strategy—an original 2009 objective for the task force. Our work indicates that the task force has not formulated such plans or set time frames and milestones in part because of the changes in leadership and goals in 2012. Nevertheless, after more than 5 years, the task force has not issued best practices or national standards for collecting and reporting disposition information or developed a national strategy, even though disposition reporting has been a long-standing challenge. Establishing plans with time frames and milestones could help hold the task force accountable for making more progress in achieving its goals and the overall result of improved disposition reporting. Taking these steps would also be consistent with program management standards that call for specific goals and objectives to be conceptualized, defined, and documented in the planning process, along with the appropriate steps, time frames, and milestones needed to achieve those results. FBI audits of states: The FBI conducts a triennial audit of state criminal justice information systems to determine, among other things, whether (1) the records the state maintains contain all known arrest and disposition information and (2) the submission of criminal record information to the FBI has been "unduly" delayed. Federal regulations provide that states should submit dispositions to the Interstate Identification Index within 120 days after the disposition occurred (28 C.F.R. § 20.37). To determine whether states are meeting these two requirements, FBI auditors review state-level processes and procedures and assess, among other things, whether the state repository has a backlog of dispositions that it has not submitted to the FBI. The FBI found that from 2011 through 2013, 12 of the 44 states that it had audited were noncompliant with one or both of the requirements. For example, a 2012 FBI audit of 1 state found that the state was submitting dispositions to the FBI only twice a year. In response to noncompliant audit findings, states are required to submit a corrective action plan to the FBI describing how the state plans to come into compliance with audit requirements. In addition to the lack of national standards governing the submission of dispositions from state criminal justice agencies and repositories to the FBI, our discussions with officials from our 4 case study states, BJS, SEARCH, and the National Center for State Courts—and our review of reports that these entities published—identified the three challenges most frequently cited as negatively affecting the completeness of state criminal records: (1) prosecutors not reporting final decisions in a case, (2) the lack of official arrest records when law enforcement cites and then releases an individual, and (3) case numbers not transferring accurately among local agencies. DOJ's grant funding and other assistance programs have helped states address these challenges. Prosecutors not reporting final case decisions: According to officials from DOJ and our case study states, a major contributor to arrest records lacking final dispositions is that prosecutors may decline to prosecute an individual but not report this information to the state's central records repository.
Prosecutors may decline to prosecute an individual for a variety of reasons, such as insufficient evidence or the low severity of the offense. Prosecutors also have the authority to offer plea bargains, which reduce the seriousness of a charge in return for a guilty plea or other forms of cooperation with the prosecution. Prosecutors cited excessive workload and the lack of technology and human resources as reasons why they did not report declinations to prosecute, according to a 2005 BJS survey. When not reported, other prosecutorial decisions that can lead to an arrest record without a disposition include decisions to consolidate a case into another case and to close a case that has become dormant because of insufficient evidence or witnesses, among other things. Fingerprints not collected under cite-and-release practices: Incomplete criminal history records can also result from law enforcement officials citing and releasing individuals without formally arresting and fingerprinting them. This can result in state and local courts submitting dispositions to a state's central records repository without a corresponding arrest record because the individual was never fingerprinted. Typically, states allow citation and release for misdemeanor offenses, but according to the National Conference of State Legislatures, at least 2 states permit citation and release for some felonies. Cite-and-release policies can result in a significant number of incomplete criminal history records. For example, a senior official from 1 of our case study states said that cite-and-release arrests were one of the practices that contributed to approximately 1.6 million dispositions that are not linked to an arrest, which the state keeps in an independent data system and is working to match up with the corresponding arrest records. According to the National Conference of State Legislatures, cite-and-release arrests are a common practice for law enforcement agencies and are useful to these agencies because they can lower jail populations and reduce costs by releasing arrestees who pose little risk to public safety. According to officials from our 4 case study states and a national focus group convened by the National Center for State Courts, mobile "live scan" devices that digitally record and electronically transmit fingerprint images, or live scan devices placed in courtrooms, could help improve the completeness of criminal history records. Courts can use such devices to immediately fingerprint individuals upon arrival in court for the citation hearing. However, a senior official from one of our case study states and a senior official from the National Center for State Courts said that local criminal justice agencies face significant barriers to doing so, such as the lack of resources and the difficulty of integrating live scan devices into existing courtroom procedures. Case numbers not transferring among local agencies: Senior officials in 3 of our 4 case study states said that they faced challenges in transferring unique case control numbers among local criminal justice agencies—such as law enforcement agencies, courts, prosecutors, and the state record repository. Law enforcement typically generates the case control number when an individual is arrested and fingerprinted, and some states use the number to associate all subsequent criminal history information from criminal justice entities with the original arrest event.
According to the state officials, the process to transfer the case control number among local criminal justice entities may be manual, and therefore prone to errors, or may occur inconsistently. For example, officials from 1 state said that certain local agencies that make arrests write case control numbers on a whiteboard, and the numbers do not always get transferred to prosecutors and courts. A disposition-reporting focus group convened by the National Center for State Courts proposed that local and state governments develop policies that identify the case control number and specify that this number should be maintained in all criminal justice systems. DOJ's assistance programs—such as best practice dissemination programs and NCHIP grant funding—have helped states address challenges in providing complete criminal records. For example, sections of the Quality Assurance Program's checklist address state practices regarding prosecutors failing to report declinations to prosecute, cite-and-release arrests, and the transfer of case numbers among local agencies. Further, the National Center for State Courts' web-based tool kit contains information on the impact that each of these challenges has on the completeness of criminal records as well as potential solutions to overcome these challenges. Additionally, states have used NCHIP grants to help overcome these challenges. For example, in fiscal year 2013, 1 state received NCHIP grant funds to implement the electronic transfer of prosecutorial case management information to the state's court system, and another state used NCHIP grant funds to automate transferring the case control number from some prosecutors to the courts. In June 2010, the Compact Council and the FBI's Advisory Policy Board approved the practice of having the FBI supply states with source documents that OPM personnel obtain during their investigations of applicants for federal employment and security clearances. The information contained in these source documents, such as arrest dispositions, could help to enhance the completeness of state criminal history records. The agencies did not enter into a formal written agreement for this information-sharing arrangement, but it was discussed and recommended in Advisory Policy Board meeting minutes. According to FBI and OPM officials, each week, OPM is to provide criminal justice-related information to the FBI, such as disposition information related to an applicant's arrest records. The FBI would then review the information and send any relevant information to state record repositories so that the states could decide whether to update their records. OPM began sending this information to the FBI in January 2011. According to OPM officials, OPM sends approximately 3,500 to 4,500 investigative records to the FBI each week, with each record representing state or local criminal record information obtained by an OPM investigator. According to officials from the FBI's Criminal Justice Information Services Division, the FBI has not been able to utilize any of the information that OPM has provided since 2011 because OPM has not provided the source documents uncovered during OPM's investigations, such as a copy of a court record. Instead, OPM provided the FBI with information derived from its final investigative reports, which can include the results of OPM investigators' phone or in-person conversations with court officials or other state criminal justice officials, among other things.
According to OPM officials, OPM informed the FBI during briefings, before it started sending information to the FBI, that OPM investigators generally do not collect source documents as part of their investigations and would not be able to do this on a routine basis. OPM officials noted that there may have been a misunderstanding with the FBI regarding the term "source" as to whether the FBI required an original court record. In October 2014, senior FBI officials said that they had had recent discussions with OPM officials to determine what, if any, criminal record information that OPM collects could be provided to the FBI to meet the FBI's requirement for source documents. A senior OPM official noted that these discussions included an FBI request for OPM to change how it provided the disposition information to the FBI to better support sorting of the information. The official added that OPM's initial assessment of the FBI's request was that it is most likely feasible. Further, the official noted that OPM had been engaged in a dialog with the FBI regarding its request and was researching the possibilities as the FBI further defined what it needed from OPM. Prior GAO work has found that collaborative activities—such as the one between the FBI and OPM—benefit from agreeing upon decisions to achieve desired outcomes. By clarifying what disposition information OPM will provide to the FBI and formally agreeing on how OPM will provide it, the FBI would be able to forward the information to states. This would allow each state to determine if the information can be used to update their criminal history records. FBI audits of the states' use of criminal history records conducted from 2011 through 2013 show that 44 states went through an audit within these 3 years, and 31 of the 44 states (about 70 percent) had at least one state agency that was out of compliance with federal regulations related to applicant notifications. Specifically, the agency did not provide all of the required notifications to a job or license applicant on the individual's rights to challenge and correct that person's criminal history records. According to FBI audit management officials, state agencies did not provide the required notifications primarily because the agencies were not aware that they had to do so. According to federal regulations:
Officials at governmental institutions and other entities that are authorized to submit fingerprints and receive FBI identification records, including criminal history records, must notify the individuals that their fingerprints will be used to check FBI criminal history records.
Officials making the determination of suitability for employment or licensing must provide applicants the opportunity to complete or challenge the accuracy of information contained in the FBI records.
Officials making suitability determinations must also advise applicants that procedures for obtaining a change, correction, or update to FBI identification records are set forth in 28 C.F.R. § 16.34.
Officials making employment and licensing determinations should not deny employment or licenses based on information in the record until the applicant has been afforded a reasonable time to correct or complete the record, or has declined to do so.
On the basis of our analysis of FBI audit results, the two notifications that state agencies most frequently did not provide to applicants were (1) that the applicant's fingerprints would be used to check FBI criminal history records, and (2) the process for changing or updating FBI records. For each audit finding related to applicant notifications, the FBI is to make a recommendation to the state that addresses the finding. The state in turn is to respond in writing with a description of the state's plans to address the FBI's recommendation, including how the state will correct its practices to ensure compliance with the audit requirements. The Compact Council or FBI may also require the state to provide additional information or updates on the state's progress in addressing the FBI's recommendations. According to Compact Council and FBI officials, the Compact Council and the FBI have educated states on the applicant notification requirements through different methods, including biannual Compact Council meetings, a communication notice from the FBI to states in 2010, and the FBI's triennial audits of states. Additionally, from May through August 2012, the Compact Council disseminated documents to states that are affiliated with the Compact Council, via e-mail and at FBI Advisory Policy Board meetings, that, among other things, describe (1) applicant rights to challenge and correct their criminal records during an FBI record check, and (2) the states' requirement to notify applicants of these rights. The FBI also published the information from these documents on the FBI's website. FBI officials noted that these documents have been widely distributed to the states and are now provided as training tools during audits. Therefore, the FBI expects that audit findings regarding the provision of applicant notice may improve in the future. Despite the FBI's audit process and the FBI's and Compact Council's efforts to educate states on the applicant notification requirements, FBI audit findings show that states generally do not provide all of the required applicant notifications. Specifically, the FBI finalized audits for 14 states after August 2012—when the Compact Council disseminated the documents to states—and 13 of the 14 states had at least one agency out of compliance with the federal notification requirements. Internal control standards note that an agency's management should ensure that audit findings are resolved, and that separate evaluations of control activities that are designed to ensure compliance with regulations can be useful to determine their effectiveness (see GAO, Standards for Internal Control in the Federal Government, GAO-AIMD-00-21.3.1, Washington, D.C.: Nov. 1, 1999). Determining the reasons why states continue to fail to comply with applicant notification requirements could help the FBI and Compact Council revise the methods they use to educate states and achieve compliance, thereby helping the FBI and states ensure that applicants are aware of their rights to challenge and correct their criminal history records. The exact number of private companies that conduct criminal record checks, the number of checks conducted each year, and the number of employers and industries requesting checks are generally unknown, but appear to be increasing.
According to a 2005 SEARCH report on criminal background checks—the most recent report DOJ has funded on this issue—in addition to a few large industry players, there are hundreds, perhaps even thousands, of regional and local background check companies that conduct criminal record checks. Management officials from the FTC, EEOC, and two industry associations we contacted said that they believed the industry is growing because of employer demand for such checks. For example, according to a senior official from the Consumer Data Industry Association—a trade association that represents private background screening companies and other companies that compile data on consumers—new companies that perform criminal records checks are regularly forming due in part to employers' increasing demand for background checks, as well as the availability of online criminal history records and publicly available databases of court records. The 2005 SEARCH report also noted that private background check companies can offer benefits that government agencies are not always able to provide, including collecting and consolidating criminal justice information from multiple sources, achieving faster response times than state agencies, and creating reports that include non-criminal-justice information. For example, in addition to an applicant's criminal history record, private companies can search other sources of information to help employers assess an applicant's suitability for employment, including public records (e.g., real estate records, liens, and motor vehicle registrations) and nonpublic information related to an individual's credit history (mortgages, auto loans, and student loans). Information provided to us by a senior official from the Consumer Data Industry Association in September 2014 cited similar benefits that private background check companies can provide. At the federal level, the Federal Trade Commission and the Consumer Financial Protection Bureau are responsible for, among other things, enforcing provisions of the Fair Credit Reporting Act. FCRA provisions require consumer reporting agencies to maintain reasonable procedures designed to avoid violations of requirements relating to information that may not be contained in consumer reports, to limit furnishing consumer reports to the permissible statutory purposes, and to assure maximum possible accuracy of the information concerning the individual referenced in the report. In addition, generally under FCRA, if an employer intends to take an adverse action on an employee or applicant based in whole or in part on a consumer report, the employer must first provide that person with a copy of the report and a description in writing of that person's rights under FCRA. According to senior FTC and CFPB officials, the agencies can take law enforcement action in connection with alleged FCRA violations by filing civil lawsuits in federal courts or through settlements with companies. In addition, the FCRA contains provisions that generally allow a civil action to address certain FCRA violations to be brought in an appropriate United States district court or another court of appropriate jurisdiction within specified time frames. FTC officials stated that the FCRA does not require private criminal background check companies to submit to federal audits or provide disclosure statements on their activities.
According to FTC officials, from fiscal years 2009 to 2014, the FTC settled 16 complaints against private background screening companies and employers for alleged FCRA violations involving information that private background check companies reported. Of the 16 complaints, 4 included allegations that related to the use of criminal record information in employment matters, such as not following reasonable procedures when providing information to employers or not providing proper notice to employees under FCRA provisions on how the information will be used. For example, in 1 complaint, the FTC alleged that a private background company failed to follow reasonable procedures to prevent the company from including the same criminal offense information in a consumer report multiple times, failed to follow reasonable procedures to prevent the company from providing obviously inaccurate consumer report information to employers, and in numerous cases provided the records of the wrong person to employers. The FTC alleged that these failures led to consumers being denied employment or other employment-related benefits. The private background company agreed to settle with the FTC by paying a civil penalty and is barred from continuing the practices that the FTC identified as violating the FCRA. CFPB also accepts complaints regarding consumer financial products and services within its jurisdiction. According to senior CFPB officials, the bureau forwards those complaints directly to the relevant companies for a response. The CFPB officials noted that they have not received many consumer complaints regarding the use of criminal history records in employment background checks. The officials said that consumers may not think to contact CFPB with such complaints because consumers may think that criminal background checks are outside of CFPB's jurisdiction since the complaints are not "financial" in nature, even though CFPB has had jurisdiction to enforce most FCRA provisions since 2011. As of October 2014, CFPB had not brought any FCRA enforcement actions against private companies related to the use of criminal history information in employment background checks. In addition, the Equal Employment Opportunity Commission enforces Title VII of the Civil Rights Act of 1964, which makes it illegal to discriminate in employment against a job applicant or employee on the basis of race, color, religion, national origin, or sex. In general, there are two ways in which an employer's use of criminal history records may violate Title VII—disparate treatment and disparate impact. Under disparate treatment, an employer may face liability for discrimination if an employer treats criminal history information differently for different applicants or employees based on a Title VII-protected characteristic, such as race or national origin. Under disparate impact, if an employer's neutral employment practice (e.g., excluding any applicant from employment based on certain criminal conduct) disproportionately harms individuals based on race or national origin, the policy will violate the law if it is not job related and consistent with business necessity for the position in question. For example, in fiscal year 2012, a large employer agreed to pay a monetary penalty and make major policy changes to resolve an EEOC administrative charge.
Specifically, under the company’s former background check policy, the company did not hire job applicants for permanent jobs if the applicants had been (1) arrested and were pending prosecution but were never convicted of an offense, or (2) arrested or convicted of certain minor offenses. The EEOC investigation revealed that this policy operated to disproportionately deny permanent employment to African-Americans, and found reasonable cause to believe that the policy was discriminatory under Title VII of the Civil Rights Act of 1964. In addition to enforcing the FCRA and Title VII of the Civil Rights Act of 1964, federal agencies have taken actions to help ensure industry compliance with, and consumer awareness of, employers’ and private background companies’ use of criminal history records. For example, according to senior EEOC officials, because of the increased ease of employers’ access to criminal history record information, in 2012, EEOC updated its guidance on the use of criminal records in employment decisions.use criminal history information—such as conviction records—to make nondiscriminatory employment decisions and to ensure that the employer uses the information for legitimate job-related purposes. For example, the guidance states that the fact of an arrest does not establish that criminal conduct has occurred, and excluding an applicant based on an arrest, in itself, is not job related and consistent with business necessity. The guidance notes, however, that an employer may make an employment decision based on the conduct underlying an arrest if the conduct makes the applicant unfit for the position in question. The guidance provides information on how an employer may The guidance also suggests examples of best practices that employers may adopt on the use of criminal history information to make employment decisions. One example from the guidance suggests that employers develop a narrowly tailored written policy and procedure for screening applicants and employees for criminal conduct that (1) identifies essential job requirements and the actual circumstances under which an applicant would perform the jobs, and (2) determines the specific offenses that may demonstrate an individual is not fit for performing such jobs. In addition, EEOC and the FTC jointly published employer guidance on how to comply with federal requirements when an employer receives background check information from private background screening companies. example, the guidance states that if an employer is going to get criminal history and other background information from a company that is in the business of compiling such information, the employer must first get an applicant’s or employee’s written permission to do the check. EEOC and FTC. Background Checks: What Employers Need to Know, 2012. accessible for private companies to search. The report added that states and state agencies that do make their criminal history records accessible to the public may only periodically update these records, which may affect the information the private background companies access. Senior officials from the Washington State Patrol who maintain the state’s criminal record repository said that the state provides a subscription service to private vendors for access to public records, but that the state updates the records only every few months. Also, private companies generally conduct name-based checks (versus fingerprint-based checks), which can decrease the accuracy of the information that the check produces. 
According to the Attorney General’s 2006 report, name-based checks can result in false positives—which can occur when a person with a common name is associated with another person’s records—and false negatives, which can occur when a search misses a record because of errors in the record or in the information used to initiate the search. According to CFPB officials, private background check companies can use additional identifiers—such as date of birth— when conducting checks in order to help mitigate inaccurate search results. We have also reported that using personal identifying information in addition to an individual’s name when conducting a check, such as the person’s date of birth, can minimize false positives and false negatives.The stakeholders we contacted did not have information on the extent to which private companies use additional identifiers when conducting checks. Related to the accuracy of private company checks, senior officials from two private sector screening companies we interviewed raised concerns about FCRA’s “contemporaneous notice” provision and its potentially negative effects on employees and applicants. In general, under FCRA, a consumer reporting agency that provides a consumer report for employment purposes that contains public record information and is likely to have an adverse effect on an individual’s ability to obtain employment is required to either (1) notify the individual that is the subject of the report that the public record information is being reported and of the name and address of the person receiving the information or (2) maintain strict procedures designed to insure that the public record information reported is complete and up to date.employee that a company is reporting public record information to the employer relieves the consumer reporting agency from ensuring that criminal record information provided to an employer is accurate. The officials did not have data or other information on how this provision has affected employees and applicants. Employers’ increasing use of criminal history record checks to determine applicants’ suitability for employment, licensing, or volunteering underscores the need for accurate and complete criminal records— including the final disposition of any criminal charges—and assurances that applicants have an opportunity to challenge or correct potentially inaccurate records. DOJ components have taken a range of actions to help state and local agencies improve the accuracy and completeness of their criminal history records and address related challenges. However, the FBI Advisory Policy Board’s Disposition Task Force has been in existence since 2009, but it has not issued best practices or national standards for collecting and reporting disposition information or developed a national strategy for improving the quality of disposition reporting, as intended. Establishing a plan with time frames and milestones could help the task force achieve its remaining goals and help improve disposition reporting. In addition, for more than 3 years, the FBI has received but not used disposition information from OPM to potentially help states enhance the completeness of their criminal history records. It is important that the FBI and OPM clarify what disposition information that OPM collects will be provided to the FBI and formally agree on how OPM will provide it. 
This would enable the FBI to forward the information to states and allow each state to determine if the information can be used to update their criminal history records. Finally, although the FBI and the Compact Council have taken steps to educate states on the regulatory requirement that they notify applicants of their right to challenge and correct the information in their criminal history records, FBI audits of state and local agencies’ use of criminal history records consistently show that states do not notify all applicants as required. Taking additional action to determine why states do not comply with this requirement could help the FBI and the Compact Council revise their educational programs and achieve compliance, thereby helping to ensure that applicants are aware of their rights to challenge and correct their criminal history records. We are making the following three recommendations: To improve disposition reporting that would help states update and complete criminal history records, we recommend that the Director of the FBI task the FBI Advisory Policy Board to establish a plan with time frames and milestones for achieving its Disposition Task Force’s stated goals. To potentially help states enhance the completeness of their criminal history records, we recommend that the Director of the FBI and the Director of the Office of Personnel Management clarify what disposition information OPM will provide to the FBI and formally agree on how OPM will provide it. This would enable the FBI to forward the information to states and allow each state to determine if the information can be used to update their criminal history records. To better equip states to meet the regulatory requirement to notify individuals of their rights to challenge and update information in their criminal history records, and to ensure that audit findings are resolved, we recommend that the Director of the FBI—in coordination with the Compact Council—determine why states do not comply with the requirement to notify applicants and use this information to revise its state educational programs accordingly. We provided a draft of this report to DOJ and OPM for their review and comment. OPM provided written comments, which are reprinted in appendix IV. DOJ concurred with all three recommendations in this report in an e-mail provided on January 13, 2015. In its written comments, OPM concurred with the one recommendation that was directed to the office. Specifically, the recommendation calls for the FBI and OPM to clarify what disposition information that OPM collects as part of its background investigations will be provided to the FBI and formally agree on how OPM will provide it. OPM noted that preliminary discussions between the FBI and OPM indicate that the disposition data in OPM’s reports of investigations may be useful to the FBI in identifying records in its system that are lacking dispositions but that contain a disposition at the local level. OPM added that it has been researching internal technical strategies that will provide specific data fields to the FBI that can be formatted and sorted in a manner best suited to the FBI’s needs. OPM noted, however, that the format in which OPM collects and maintains data is necessarily oriented toward fulfilling the agency’s assigned mission. OPM added that it is not tasked with the authority to perform criminal justice record management functions for the FBI or criminal justice assistance functions for the states and localities. 
DOJ and OPM also provided technical comments, which we incorporated in this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General, the Director of the Office of Personnel Management, and appropriate congressional committees. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

This report addresses the following questions: (1) To what extent do states conduct Federal Bureau of Investigation (FBI) criminal history record checks for selected employment sectors and what challenges, if any, do they face in conducting these checks? (2) To what extent have states made progress in improving the completeness of criminal history records and what challenges remain that federal agencies can help mitigate? (3) To what extent do private companies conduct record checks, what benefits do they provide, how are they regulated, and what challenges do they face? Regarding the extent to which states conduct FBI record checks and related challenges, we assessed the extent to which states were conducting checks—either under state statutes or regulations, or under federal authorities—for employment and volunteer positions covered by three federal laws: the National Child Protection Act of 1993, the Edward M. Kennedy Serve America Act, and the Private Security Officer Employment Authorization Act of 2004. We selected these laws to represent a range of factors, including variation in whether the law requires or authorizes (permits) an FBI record check, different employment sectors covered (i.e., nonprofit, private, or public employment), and variation in paid versus volunteer positions. In addition, we conducted a web-based survey of officials at agencies within all 50 states and the District of Columbia that maintain criminal history records (state repositories) to determine the extent to which states are conducting FBI checks for the employment sectors covered under the three federal laws. We conducted the survey from July 29, 2014, to September 30, 2014. We received a response rate of 94 percent—47 states and the District of Columbia—which we collectively refer to as states throughout this report. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling error. To ensure our survey questions were accurate, understandable, and unbiased, we pretested our survey instrument with officials in 3 states—California, Idaho, and Washington. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to its administration. We made appropriate revisions to the content and format of the questionnaire after the pretests and independent review. To ensure the validity of the responses, we reviewed survey responses to ensure logic and consistency in the responses.
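As a point of reference for the response rate cited above, and assuming the survey population was the 50 states plus the District of Columbia (51 jurisdictions) as described, the 48 respondents (47 states plus the District of Columbia) correspond to 48 divided by 51, or approximately 94 percent.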
We also analyzed federal regulations and procedures for conducting criminal record checks and evaluated previously published reports from SEARCH, the Department of Justice (DOJ), and other organizations regarding the national availability of FBI background checks, solutions proposed to address access challenges, and what challenges remain. To supplement information obtained through our national survey and our analysis of previously published reports, we conducted semistructured interviews with management officials from repositories and courts that maintain criminal history information in 4 case study states—California, Florida, Idaho, and Washington—to determine the extent to which they conduct FBI checks, any challenges faced with conducting checks, and actions taken to address those challenges. We selected the 4 states based on geographic location and other factors, including participation in the Compact Council—the primary state and federal body for setting policy regarding the interstate sharing of criminal history records for non-criminal-justice purposes. We interviewed FBI officials with responsibility for managing the Interstate Identification Index—the national system for the interstate sharing of criminal history records—to determine any challenges employers face in obtaining access to checks, and any challenges states face in adjudicating records on behalf of employers. Further, we interviewed management officials from the National Mentoring Organization, the National Center for Missing and Exploited Children, the Corporation for National and Community Service—the federal agency that oversees service programs such as AmeriCorps and Senior Corps—and the National Association of Security Companies to obtain their views on the availability of FBI criminal record checks and any challenges in obtaining access. To better understand state legal and policy challenges regarding access to background checks, we interviewed officials with SEARCH and attended a November 2013 meeting of the Interstate Compact Council, where a wide range of issues related to the non-criminal-justice use of criminal history records was discussed. Regarding the progress states have made in improving the completeness of criminal history records and related challenges, we analyzed data that states provided to DOJ via a survey from fiscal years 2006 through 2012 on the percentage of their arrest records that contained information on the disposition of those arrests. We selected this time frame because 2006 was the year the Attorney General issued the criminal record background check report and 2012 was the year with the most current available survey data. To assess the reliability of the data, we analyzed the survey methodology, interviewed DOJ officials who conducted the surveys, and examined data for obvious errors. We determined that the data were sufficiently reliable for the purposes of this report. We also analyzed the results of the FBI's most recent round of triennial state audits, which include assessing the completeness of state records and use of the records for non-criminal-justice purposes. As of January 2014, the FBI had finalized 44 state audits that the FBI conducted from 2011 through 2013. Further, we interviewed officials who maintain criminal history records in our 4 case study states to determine challenges they face in maintaining complete records and related initiatives to improve record completeness.
We also interviewed officials from the FBI and DOJ’s Bureau of Justice Statistics (BJS) who have key roles in providing access to national criminal history records and providing assistance to states in maintaining complete records. In addition, we interviewed officials from the National Employment Law Project to discuss the potential impacts that incomplete criminal records have on job applicants. Further, we interviewed officials from the Office of Personnel Management (OPM) who collect disposition information as part of OPM background investigations. Regarding what is known about the role of the private sector in conducting employment-related background checks, we reviewed relevant sections of the Fair Credit Reporting Act (FCRA) and Title VII of the Civil Rights Act of 1964, laws that govern the use of criminal history records and that regulate background checks conducted by private background screening companies. We analyzed SEARCH’s 2005 report on the commercial sale of criminal justice record information and a 2006 Attorney General’s report on criminal history background checks.analyzed guidance prepared by the Equal Employment Opportunity Commission on the use of criminal history record information in employment decisions in order to better understand what challenges employers, applicants, and consumer reporting agencies face in using criminal history record information. We also interviewed senior officials from associations that represent background screening companies, including the National Association of Professional Background Check Screeners and the Consumer Data Information Association, to determine the role of private sector agencies in providing criminal history information to employers. Further, we interviewed senior officials from federal agencies that regulate these private sector entities—including the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau—to determine how the industry is regulated as well as the size and scope of the industry. We conducted this performance audit from October 2013 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Examples of Federal Laws Authorizing State Access to FBI-Maintained Criminal History Records for Non-Criminal- Justice Employment and Licensing Purposes Description Allowing expenditure of funds for the Federal Bureau of Investigation (FBI) to be used for the exchange of identification records, including criminal history record information, with officials of state and local governments for purposes of employment or licensing if authorized by state statute and approved by the Attorney General. Allowing authorized employers to submit to the state identification bureau of a participating state, fingerprints or other means of positive identification, as determined by the Attorney General, of an employee or applicant for employment as a private security officer. 
For conducting criminal history checks of individuals selected to serve in a position in which the individuals receive a living allowance, stipend, national service educational award, or salary through a program receiving assistance under the national service laws.
Permitting states to have in effect procedures requiring qualified entities designated by the state to contact an authorized agency of the state to request a nationwide background check for the purpose of determining whether an individual has been convicted of a crime that bears upon that individual's fitness to have responsibility for the safety and well-being of children, the elderly, or individuals with disabilities.
Relating to promulgation of regulations by the Attorney General to address the minimum standards for background checks, including criminal background checks, and pre-employment drug testing for potential employees involved in the transportation of violent prisoners in or affecting interstate commerce in the private prisoner transport industry.
Relating to the fingerprinting and criminal background check of individuals involved with the provision to children under the age of 18 of child care services for each federal agency or facility operated by the federal government that hires such individuals.
Provides for the Attorney General, upon request of the chief executive officer of a state, to conduct fingerprint-based checks of the national crime information databases pursuant to a request submitted by a private or public elementary or secondary school, a local educational agency, or state educational agency, on individuals employed by or under consideration for employment by, or otherwise in a position in which the individual would work with or around children in the school or agency.
For use of officials of the National Indian Gaming Commission in conducting background checks on key employees and primary management officials.
Upon the request of a state regarding the issuance of a license to operate a motor vehicle transporting in commerce a hazardous material to an individual, the Attorney General shall carry out a background records check, including a check of the relevant criminal history databases, regarding the individual and notify the Secretary of Homeland Security regarding the results.
The Commodity Futures Trading Commission is authorized to register futures commission merchants, associated persons of futures commission merchants, introducing brokers, associated persons of introducing brokers, commodity trading advisors, associated persons of commodity trading advisors, commodity pool operators, associated persons of commodity pool operators, floor brokers, and floor traders upon application in accordance with rules and regulations and in the form and manner to be prescribed by the commission, which may require the applicant, and such persons associated with the applicant as the commission may specify, to be fingerprinted and to submit, or cause to be submitted, such fingerprints to the Attorney General for identification and appropriate processing.
A nursing facility or home health care agency may submit a request to the Attorney General (through the appropriate state agency or agency designated by the Attorney General) to conduct a search of the records of the Criminal Justice Information Services Division of the Federal Bureau of Investigation for any criminal history records corresponding to the fingerprints or other identification information submitted regarding an applicant for employment if the employment position is involved in direct patient care.
The Under Secretary of Transportation for Security shall require that an individual to be hired as a security screener undergo an employment investigation (including a criminal history record check) under 49 U.S.C. § 44936(a)(1).
An association of state officials regulating pari-mutuel wagering, designated by the Attorney General, may submit fingerprints to the Attorney General on behalf of any applicant for a state license to participate in pari-mutuel wagering. In response to such a submission, the Attorney General may, to the extent provided by law, exchange, for licensing and employment purposes, identification and criminal history records with state governmental bodies to which such applicant has applied.
Every member of a national securities exchange, broker, dealer, registered transfer agent, registered clearing agency, registered securities information processor, national securities exchange, and national securities association, shall require that each of its partners, directors, officers, and employees be fingerprinted and shall submit such fingerprints, or cause the same to be submitted, to the Attorney General for identification and appropriate processing. In providing identification and processing functions, the Attorney General shall provide the Securities and Exchange Commission and self-regulatory organizations designated by the commission with access to all criminal history record information.

Summary of federal complaint: According to the FTC's complaint, a private background company failed to follow reasonable procedures to prevent the company from including the same criminal offense information in a consumer report multiple times, failed to follow reasonable procedures to prevent the company from providing obviously inaccurate consumer report information to employers, and in numerous cases even provided the records of the wrong person to employers. The FTC alleged that these failures led to consumers being denied employment or other employment-related benefits. Outcome: The private background company agreed to settle with the FTC by paying a civil penalty and is barred from continuing the practices that the FTC identified as violating the Fair Credit Reporting Act (FCRA).
Summary of federal complaint: According to the FTC's complaint, a private background company obtained, and provided employers with, information about job applicants, including possible criminal records of applicants on the National Sex Offender Registry. The FTC claimed the company violated the FCRA by failing to use reasonable procedures to assure maximum possible accuracy of the information and failing to provide written notices to applicants that the company reported public record information to prospective employers that may adversely affect the applicant's ability to obtain employment.
Outcome: The private background company agreed to settle with the FTC by maintaining reasonable procedures to (1) assure the maximum possible accuracy of information provided in background checks, and (2) notify consumers when the company has provided public information about them that is likely to have an adverse effect on their ability to obtain employment.

Summary of federal complaint: According to the FTC complaint, a private background company offered an online service allowing employers to purchase background reports that contain, among other information, arrest and conviction records. The FTC claimed that the background company violated several provisions of the FCRA, including failure to maintain reasonable procedures to ensure that the information it provided was used for a permissible purpose and failure to use reasonable procedures to assure maximum possible accuracy of information provided to employers.

Outcome: The private background company agreed to settle with the FTC and pay a civil penalty. In addition, the settlement barred the private background company from continuing the practices that the FTC identified as violating the FCRA.

Summary of federal complaint: According to the FTC's complaints, two employers contracted with a private background company to conduct background checks that included, among other information, criminal history records. The employers used the results of the background checks as a basis for hiring applicants or retaining employees, and throughout the course of a year, took adverse action against numerous job applicants by denying them employment. The FTC claimed that the employers violated the FCRA by, among other things, failing to provide the employees and applicants with notices before taking adverse actions. Providing such notices would have allowed the applicants and employees to dispute the accuracy of the background checks.

Outcome: The two employers agreed to settle with the FTC, and both paid civil penalties. In addition, the settlements required the employers to provide FCRA-required notices to applicants and employees in the future. The settlements also contain record-keeping and reporting provisions to allow the FTC to monitor compliance.

David C. Maurer, (202) 512-9627 or maurerd@gao.gov.

In addition to the contact named above, Eric Erdman (Assistant Director), Joanna Chan, Willie Commons III, Charlotte Gamble, Eric Hauswirth, Brandon Jones, Jill Lacey, Eileen Larence, Winchee Lin, Linda Miller, Jessica Orr, Martene Rhed, Tovah Rom, and Cynthia Saunders made key contributions to this report.
Authorized employers use information from FBI criminal history record checks to assess a person's suitability for employment or licensing. States create criminal records and the FBI facilitates access to these records by other states for nationwide checks. GAO was asked to assess efforts to address concerns about incomplete records, among other things. This report addresses to what extent (1) states conduct FBI record checks for selected employment sectors and face any challenges; (2) states have improved the completeness of records, and remaining challenges that federal agencies can help mitigate; and (3) private companies conduct criminal record checks, the benefits those checks provide to employers, and any related challenges. GAO analyzed laws and regulations used to conduct criminal record checks and assessed the completeness of records; conducted a nationwide survey, which generated responses from 47 states and the District of Columbia; and interviewed officials from the FBI and 4 states (California, Florida, Idaho, and Washington) who manage checks. GAO selected states based on geographic location and other factors.

Most states that responded to GAO's nationwide survey reported conducting Federal Bureau of Investigation (FBI) criminal history record checks for individuals working with vulnerable populations—such as children and the elderly—and other employment sectors that GAO reviewed. States that did not conduct FBI record checks said this was because the state lacked a designated agency to review check results, among other challenges. In 2006, the Attorney General proposed that nongovernmental entities also serve in this role but noted that this would require considerations about securing data and protecting personal information. States have improved the completeness of criminal history records used for FBI checks—more records now contain both the arrest and final disposition (e.g., a conviction)—but there are still gaps. Twenty states reported that more than 75 percent of their arrest records had dispositions in 2012, up from 16 states in 2006. Incomplete records can delay checks and affect applicants seeking employment. The Department of Justice has helped states improve the completeness of records through grant funding and other resources, but challenges remain. For example, the FBI's Advisory Policy Board—which includes representatives from federal, state, and local criminal justice agencies—created a Disposition Task Force in 2009 to address issues regarding disposition reporting, among other things. The task force has taken actions to better measure the completeness of state records and identify state requirements for reporting disposition information. However, the task force does not have plans with time frames for completing remaining goals, such as examining and recommending improvements in national standards for collecting and reporting disposition information. According to stakeholders GAO contacted, the use of private companies to conduct criminal history record checks appears to be increasing because of employer demand and can provide benefits, such as faster response times. Federal agencies regulate these companies and have settled complaints, such as in cases where the wrong records were sent to employers. Private companies can face challenges in obtaining complete and accurate records, in part because not all states make their criminal record information accessible for private companies to search.
GAO recommends, among other things, that the FBI establish plans with time frames for completing the Disposition Task Force's remaining goals. The Department of Justice concurred with all of GAO's recommendations.
The Commission on Civil Rights is a fact-finding federal agency required to report on civil rights issues. Established by the Civil Rights Act of 1957, the Commission is directed by eight part-time commissioners and, in fiscal year 2003, employs approximately 70 staff members. The Commission's annual appropriation has averaged approximately $9 million since fiscal year 1995. The eight commissioners have a number of responsibilities, including investigating claims of voting rights violations and studying and disseminating information, often collected during specific projects, on the impact of federal civil rights laws and policies. Commissioners serve 6-year terms, and they are appointed on a staggered basis. Four commissioners are appointed by the President, two by the president pro tempore of the Senate, and two by the speaker of the House of Representatives. No more than four commissioners can be of the same political party. The Commission accomplishes its mission by (1) investigating charges of citizens being deprived of voting rights because of color, race, religion, sex, age, disability, or national origin; (2) collecting and studying information concerning legal developments on voting rights; (3) appraising federal laws and policies with respect to discrimination or denial of equal protection of the laws; (4) serving as a national clearinghouse for information; and (5) preparing public service announcements and advertising campaigns on civil rights issues. The Commission may hold hearings and, within specific guidelines, issue subpoenas to obtain certain records and have witnesses appear at hearings. The Commission must submit at least one report annually to the President and the Congress that monitors federal civil rights enforcement in the United States, and such other reports as deemed appropriate by the Commission, the President, or the Congress. For instance, in 2002, the Commission issued a report that evaluated the civil rights activities of the Departments of Justice, Labor, and Transportation and another on election reform. The Commission is also authorized to investigate individual allegations of voting rights discrimination. However, because it lacks enforcement powers that would enable it to apply remedies in individual cases, the Commission refers specific complaints it receives to the appropriate federal, state, or local government agency for action. A staff director, who is appointed by the President with the concurrence of a majority of the commissioners, oversees the day-to-day operations of the Commission and manages the staff in its six regional offices and Washington, D.C., headquarters. The Commission also has 51 State Advisory Committees—1 for each state and the District of Columbia. Each committee is composed of citizens familiar with local and state civil rights issues. The members serve without compensation and assist the Commission with its fact-finding, investigative, and information dissemination functions. In 1997, we reported that the management of the Commission's operations lacked control and coordination. Among other things, we found that projects lacked sufficient documentation, project monitoring to detect budget delays or overruns was not systematic, and little coordination took place among offices within the Commission to approve and disseminate reports. Moreover, senior officials were unaware of how Commission funds were used and lacked control over key management functions, making the Commission's resources vulnerable to misuse.
We reported that key records had been lost or misplaced or were nonexistent, leaving insufficient data to accurately portray Commission operations. Because agency spending data were centralized, Commission officials were unable to provide costs for individual offices or functions. We also found in 1997 that the Commission had never requested any audits of its operations, and information regarding Commission audits in its fiscal year 1996 report on internal controls was misleading. The Commission also had not updated administrative guidance to reflect a major reorganization that occurred in 1986. We recommended that the Commission develop and document its policies and procedures to assign responsibility for management functions to the staff director and other Commission officials and provide mechanisms for holding them accountable for proper management of Commission operations. The FAR, established to codify uniform policies and procedures for acquisition by executive agencies, applies to acquisitions of supplies and services made by federal executive agencies—including the U.S. Commission on Civil Rights—with appropriated funds. The FAR contains procedures for awarding both competitive and sole-source contracts and selecting contracting officers. The FAR calls for federal agencies to promote competition to the maximum extent practicable when making purchases using simplified acquisition procedures. In 1994, Congress authorized the use of simplified acquisition procedures for acquisitions not exceeding $100,000. Under those procedures, agency officials may, among other things, select contractors using expedited evaluation and selection procedures and are permitted to keep documentation to a minimum. In 1996, Congress authorized a test program that permits federal agencies to use simplified acquisition procedures for commercial items not exceeding $5 million. The authority to issue solicitations under this test program is set to expire on January 1, 2004. When they award on a sole-source basis, contracting officers are required by regulations to prepare a written justification explaining the absence of competition. The regulations also generally require public notices of proposed sole-source awards. Further, contracting officers must determine that the price of a sole-source award is reasonable. This determination may be based on evidence such as (1) market research, (2) current price lists or catalogs, (3) a comparison with similar items in a related industry, or (4) a comparison to an independent government cost estimate. Under the Federal Supply Schedule, the General Services Administration (GSA) awards contracts to several companies supplying comparable products and services. These contracts can then be used by any federal agency to purchase products and services. As a general rule, the Competition in Contracting Act of 1984 requires that orders under the Federal Supply Schedule result in the lowest overall cost alternative to meet the needs of the agency. The FAR and GSA procedures generally require agencies to compare schedule offerings of multiple vendors in arriving at an award decision. The Commission has established a set of project management procedures for commissioners and staff to follow when they plan, implement, and report the results of approved Commission projects. However, the procedures lack certain key elements of good project management that are reflected in federal internal control and budget preparation guidance.
For example, commissioners do not generally receive updates about certain project cost information. Commissioners, in practice, make many planning decisions with little or no discussion of project costs, which can eventually contribute to problems such as delayed products and lower-quality products if too many projects are undertaken. Additionally, Commission procedures do not provide for systematic commissioner input throughout projects. In practice, commissioners do not always have the opportunity to review many of the reports and other products drafted by Commission staff before products are released to the public, which serves to significantly reduce the opportunity for commissioners to help shape a report's findings, recommendations, and policy implications of civil rights issues. The Commission has made a number of improvements in project management since our 1997 review. For example, the Commission has revised and established policies that clarify the roles of the staff director and senior Commission staff such as the assistant staff director of the Office of Civil Rights Evaluation (OCRE) and the general counsel in the Office of the General Counsel (OGC), both of whom report directly to the staff director. These three key Commission officials are responsible for carrying out the policies established by the eight commissioners and for directly overseeing and managing virtually all headquarters projects that result in Commission products. See figure 1 for an abbreviated organization chart that shows the reporting relationship between commissioners, the staff director, and senior Commission staff. In addition to clarified roles of the staff director and senior Commission staff, the chief of the Budget and Finance Division now regularly provides the staff director with spending data by office and function. This detailed information enables the staff director to track the status of the Commission's expenditures by organizational component at headquarters and field offices. Senior Commission staff and the project team leaders we interviewed were also using various project management procedures to meet target deadlines. For example, the assistant staff director, OCRE, and the deputy general counsel, OGC, were using a combination of techniques to ensure that project deadlines were met. These techniques included weekly meetings with staff, weekly or monthly reports from staff, and computer-generated schedules to monitor large, complex projects and smaller projects. Moreover, all project team leaders were routinely monitoring their assigned projects to ensure that projects stayed on schedule. Our review determined that the Commission's project management procedures allow commissioners, the staff director, senior Commission staff, and project team leaders to manage long-range projects that take a year or longer to complete as well as time-critical projects that take several months or weeks to complete. The Commission chairperson, who was also chairperson in 1997, is of the opinion that Commission projects and products in fiscal year 2002 and later were generally timelier than those products discussed in our 1997 report and testimony. Table 1 summarizes the number of Commission products issued during fiscal year 2002 by Commission office and by type of product.
Appendix I provides details about project names and product titles produced during fiscal year 2002 by those offices that generate headquarters Commission products that result from commissioner-approved projects: the Office of Civil Rights Evaluation, the Office of General Counsel, and the Office of the Staff Director (OSD). In addition, some fiscal year 2002 projects will generate products in future years. Appendix II lists the number of products, by type of product, issued or expected to be issued after fiscal year 2002 from projects that were ongoing during fiscal year 2002. Commission procedures do not provide for commissioners and senior Commission staff to systematically receive project cost information—primarily staff time charges—to help commissioners and senior staff plan and monitor projects. Commissioners continue to approve the majority of projects and products each year without having any specific information on how much the project will cost, or how much similar projects have cost in past years. Both federal government guidance and private sector project management specialists emphasize the importance of top-level reviews of actual performance. Feedback about actual project performance, including costs, is basic information essential for sound planning and allocation of scarce staff and dollar resources. Without specific estimates of how much staff time will be spent and how much the project and its products will cost, Commission planning will continue to be conducted without key information. Commissioner approval of projects without key cost information may contribute to problems such as delayed products and lower-quality products if too many projects are undertaken for staff to carry out without additional resources. The Commission has taken action to limit the number of major projects that it will approve during the Commission's annual long-range planning meeting at which commissioners decide which projects to undertake. However, commissioners continue to approve new projects throughout the year without any detailed feedback from the staff director about the amount of time that staff is already committed to spend to complete previously approved projects. Unless they periodically receive a comprehensive picture of how much current projects have cost to date and how much staff time has already been committed, commissioners will continue to make decisions about how many and which future projects to undertake, or which current projects and costs to adjust, without basic information necessary for sound project planning. Although cost information is valuable in project management, commissioners have been divided over how much of it they need. During our review, several commissioners expressed concern, both to us and publicly at monthly Commission meetings, that commissioners were not receiving sufficient information about project costs. However, several other commissioners said that they received a sufficient amount of information about the status of projects. In March 2003, the commissioners did not pass a motion—the vote was tied 4-4—for the staff director to provide them with, among other things, quarterly information about project costs that commissioners were not receiving at that time. However, the commissioners reached a compromise and passed a subsequent motion in April 2003 to receive that quarterly cost information. Specifically, the motion requires commissioners to receive information quarterly on cost by project and by office.
One category of information that was in the original motion but not included in the motion that passed is projects' travel costs. Good project management principles dictate that cost information be integrated in a timely manner into project management. As applied to the Commission, cost information may be most useful if it is provided on a monthly basis. During its monthly meetings, the Commission discusses whether or not to undertake projects on emerging civil rights issues. These decisions will be better informed if, for example, data on costs already incurred—or expected on other projects—are included in the monthly discussions. As of September 2003, commissioners had not begun to receive the agreed-upon information. Once the commissioners begin to receive the cost information, it will be important to assess the extent to which the information is meeting their collective needs and responsibilities. Although the Commission has guidance on project management procedures, we found that commissioners have limited involvement in the management of Commission projects once they have been approved. This condition serves to significantly reduce the commissioners' ability to lend their expertise to the development of Commission products that address civil rights issues. On a positive note, the Commission has a set of written instructions that outline the procedures that should be followed to manage its projects. The instructions describe the general steps that should be taken in the planning, implementation, and product preparation stages of projects undertaken by the Commission. For example, the instructions address steps for planning projects at the front end as well as legal review prior to the publication of reports. Nevertheless, the general nature of the written project management guidance limits the involvement of commissioners in project management. Specifically, the guidance does not specify the role that commissioners play in the implementation and report preparation phases, nor does it discuss when commissioners should be involved throughout the process. It is especially important to have clear guidance on commissioner involvement because commissioners serve on a part-time basis and are not headquartered in a central building. Clear guidance on the nature and timing of commissioner involvement can help commissioners prepare themselves to make substantive contributions to implement a project and sharpen its conclusions and policy recommendations. In addition, clear guidance can help commissioners balance their commission duties with other professional duties and travel commitments. While the guidance addresses the role of commissioners in the last stage of the product preparation phase—final revision and approval prior to official release—this guidance only covers 2 of the 15 types of products produced by the Commission: statutory reports and clearinghouse reports. In fiscal year 2002, 3 of the Commission's 32 products were either a statutory or a clearinghouse report. Put another way, the guidance does not dictate that commissioners give final review and approval for 29 of the 32 products worked on in fiscal year 2002. The 13 product types not covered by the guidance include, for example, briefings, briefing papers, executive summaries, staff reports, and State Advisory Committee reports. However, these products address civil rights issues and, as such, could benefit from review by commissioners, as appropriate, as they are being developed.
Further evidence pointing to a lack of commissioner involvement in project management is the very general nature of the monthly staff reports—the main management tool currently used to keep commissioners informed about the progress of projects. The monthly staff report is prepared by the staff director and sent to commissioners in preparation for the monthly Commission meetings. The report highlights the status of selected on-going projects (the report may contain a summary of any of the 15 product types). The staff director has the discretion to select the projects to include in the monthly report. We reviewed the 11 monthly reports that the staff director sent to the commissioners during fiscal year 2002 in preparation for the monthly Commission meetings and found that information in those reports about the two-volume statutory report (and other projects and reports) to be issued during the year was limited to general descriptions of project status. For example, regarding the Commission’s statutory report, commissioners were informed via the staff director’s monthly reports that “progress on the project has slowed” or “staff is working on an initial draft of the report” or “staff has nearly completed a draft of the report.” These updates did not contain information about the project’s costs or staff day usage to date, nor potential findings or conclusions. Likewise, during the 4-month period that the one clearinghouse project and report were being developed, only one monthly report even mentioned that project, and none of the four monthly staff reports made reference to the anticipated product or the anticipated date of report issuance. During our review, several commissioners told us that they are often unaware of the status and the content of many of the written products that result from approved projects until they are published or released by the Commission to the public. Moreover, some commissioners expressed dissatisfaction with the level of detail on project status contained in the monthly report. Some commissioners are increasingly concerned about their lack of opportunity to review reports and other products drafted by Commission staff before they are released to the public. These commissioners believe that a lack of periodic commissioner input and review undermines the opportunity for commissioners to help shape a report’s findings, recommendations, and policy implications of civil rights issues. In June and July 2003, several commissioners expressed their displeasure publicly about this lack of involvement by voting against, or abstaining from, acceptance of Commission draft products, in part because the commissioners had not had the opportunity to provide input to those projects or products. Other commissioners voted to accept the draft reports without commenting on their opportunity, or lack thereof, to provide input. The Commission on Civil Rights lacks sufficient management controls over its contracting procedures. In fiscal year 2002, the Commission did not follow proper procedures in awarding most of its 11 contracts. For example, the Commission’s largest dollar contract—currently $156,000—is for media services and has been ongoing for over 3 years with the same vendor. According to Commission officials, key documentation on how the contract was initially awarded was missing from contract files. Moreover, Commission officials did not follow the legal requirements to obtain competition for subsequent media services contracts. 
As a result, the Commission did not have all of the information it should have had to determine if the contract pricing was fair and reasonable. The Commission also has inadequate controls over the administration of its contracts. For example, information on specific tasks to be performed by vendors is communicated orally, not in a performance-based statement of work as required by regulation. As a result, it is difficult for the Commission to track vendors' performance against an objective measure and ensure that public funds are used in an effective manner. The Commission did not follow federal contracting regulations for any contracts initiated in fiscal year 2002 that were over $2,500. All but 4 of its 11 contracts were at or over this amount. When a government agency purchases services, the contracting officer must follow certain procedures, though these procedures vary slightly depending on the contracting method. Using simplified acquisition procedures, the contracting officer may select contractors using expedited evaluation and selection procedures and is permitted to keep documentation to a minimum. The agency still must, for contracts over $2,500, seek competition to the maximum extent practicable. If circumstances prevent competition, agencies may award "sole-source" contracts, but are required to justify them in writing. A government agency may also issue orders against contracts that GSA awards to multiple companies supplying comparable products and services under its Federal Supply Schedule. The FAR and GSA procedures require agencies to consider comparable products and services of multiple vendors prior to issuing an order over $2,500. For service orders, the agency must send a request for quotes (RFQ) to at least three Federal Supply Schedule contractors based on an initial evaluation of catalogs and price lists. The agency must evaluate the quotes based on factors identified in the RFQ. GSA's ordering procedures also state that the office ordering the services is responsible for considering the level of effort and mix of labor proposed to perform specific tasks and for making a determination that the total price is fair and reasonable. In fiscal year 2002, seven of the Commission's contracts were for amounts over $2,500, and the Commission did not follow proper procedures for any of them. For example, in fiscal year 2002, the Commission ordered its media services from a contractor listed on the Federal Supply Schedule. Instead of requesting quotes from other Schedule vendors, as required by GSA's special ordering procedures, the Commission merely selected the same contractor to which it had made improper awards in previous years using simplified acquisition procedures. One likely reason the Commission did not follow proper contracting procedures is that it does not have personnel who are sufficiently qualified to carry out several of the required actions. The Commission has only two officials authorized to enter into contracts: the Acting Chief of the Administrative Services and Clearinghouse Division and the staff director. However, both officials are operating with limited awareness of proper federal contracting procedures. By not following proper procedures, the Commission did not obtain the benefits of competition and did not meet federal standards of conducting business fairly and openly.
For example, by not competing its media services contract, and by using an incremental approach to obtaining media services, the Commission did not make clear that it would have a recurring need for media services. Initially, in April 2000, the media services contract was offered with a 90-day/$25,000 maximum. A series of 90-day, 60-day, and even 30-day contracts followed, none of which were competed. The Commission's relationship with this media services vendor has evolved into what is now an annual award with a maximum value of $156,000. The staff director could not document for us whether the agency competed its media services contract initially in 2000, and told us that it did not compete subsequent awards, including awards made in the last 2 years using the Schedule. In effect, the Commission denied itself the opportunity to choose from a potential pool of bidders because other vendors were likely unaware of the contract, the contract's potential value, or both. The Commission lacks sufficient internal control over the administration of its contracts. Examples of internal control activities include maintaining clear and prompt documentation on all transactions and other significant events; evaluating contractor performance; and segregating key duties and responsibilities among different people to reduce the risk of error or fraud. However, these elements of good organizational management are not evident in the Commission's administration of its contract activities. For example, the Commission has not met federal requirements to establish and maintain proper contract files and to report contract actions to the Federal Procurement Data Center (FPDC), just a few of the numerous contract administration functions listed in the FAR. As a result, the Commission is not promoting the transparency necessary to keep the Congress and others informed about the Commission's contracting activities. According to federal regulations, an agency must establish and maintain for a period of 5 years a computer file containing unclassified records of all procurements exceeding $25,000. Agencies must be able to access certain information from the computer file for each contract, such as the reason why a non-competitive procurement procedure was used, or the number of offers received in response to a solicitation. Agencies must transmit this information to the FPDC, the government's central repository of statistical information on federal contracting that contains detailed information on contract actions over $25,000 and summary data on procurements of less than $25,000. The Commission has not followed federal regulations or established internal control standards with regard to reporting transactions. According to the Acting Chief of the Administrative Services and Clearinghouse Division, and to officials at the FPDC, the Commission has not met federal reporting requirements to the FPDC for at least the last 3 fiscal years. The Acting Chief said that a lack of resources is the reason for the Commission's noncompliance with this federal requirement. Moreover, the FPDC was unaware that the Commission, which historically had not entered into contracts over $25,000, now had contracts above that amount. FPDC officials told us that when they contacted the Commission, officials there told the FPDC that they were not able to submit the data because of, for example, problems with its firewalls. In addition, Commission officials did not accept FPDC's offer to come to FPDC's offices and key in the data.
According to federal regulations, agency requirements for service contracts should be defined in a clear, concise performance-based statement of work that enables the agency to evaluate a contractor's work against measurable performance standards. Despite these regulations and principles of good management, the Commission has not established a system to monitor contractors' performance, even for its contract that exceeds $100,000. The Commission has no records that document its decision-making on this contract. Lack of this basic, well-established management control makes the Commission vulnerable to resource losses due to waste or abuse. An integral component of good organizational management is a strong communication network between key decision-makers. To that end, it is vital that information on key transactions be communicated among the staff director, the commissioners, and other key decision-makers. In addition, internal control standards dictate that key duties and responsibilities be divided or segregated among different people to reduce the risk of error or fraud. This includes the separation of the responsibilities for authorizing, processing, recording, and reviewing transactions, and handling any related assets. No one individual should control all key aspects of a transaction or event. Due to the nature of the Commission's operating environment, the staff director does not provide information on procurements to the commissioners. According to the chairperson of the Commission, contracting is one of the duties that the Commission has delegated to the staff director. In fact, at public Commission meetings, when commissioners raised questions concerning contracting activities and sought information on contract cost and vendor performance, the chairperson asserted that contracting is not an area with which commissioners should be concerned. Moreover, a recent motion for commissioners to, among other things, be provided with cost and status information on contracts and other items failed to pass. Commissioners reached a compromise and passed a subsequent motion; however, it did not include the provision to receive information on contracts. Although the commissioners are charged with setting the policy direction of the agency, the chairperson told us that the decision to contract out for a service is not a policy decision. She told us that the decision for the Commission to receive a certain service is a policy decision, but whether or not to perform that function in-house or contract out for it is not. Since the contracting function is delegated to the staff director, it is her position that the commissioners need not know any details, unless there is an allegation of fraud, waste, or abuse on the staff director's part. For the Commission's largest contract, however, only the staff director has knowledge of what is being done, why it is being done, and how it is being done. The Acting Chief of the Administrative Services and Clearinghouse Division is not involved because of the dollar limit on her contracting authority. Without greater transparency, the current operating environment has no mechanism to elevate concerns about contractual impropriety to the Commission. The Commission's fiscal activities have not been independently audited in at least 12 years. As noted in our 1997 report, the Commission is not required by statute to have an Inspector General, which could independently and objectively perform financial audits within the agency.
In addition, for the fiscal year 2002 audit cycle, the Commission received a waiver from the federal requirement that its financial statements be independently audited. The Commission submitted a request to have the requirement waived for both the fiscal year 2003 and 2004 audit cycles, citing a stable budget and high costs incurred through the agency's conversion to a new accounting system. OMB granted the waiver for fiscal year 2003, but denied the request for the fiscal year 2004 cycle. In addition to this lack of independent financial oversight, the Commission's current financial situation is not transparent within the agency. The majority of the agency's budget-related information is centralized, with only the staff director and the chief of the Budget and Finance Division having a detailed knowledge of the Commission's financial status. However, both the body of the commissioners, which heads the organization, and senior Commission officials, who are responsible for planning and carrying out Commission projects, only know what is reported to them by the staff director. On the basis of our interviews with commissioners and other Commission officials, we found that information on costs is limited. As a result of the centralized nature of the Commission's financial operations, financial oversight is structured in a way that precludes appropriate checks and balances. Moreover, the Commission has in place a policy that discourages individual commissioners and their special assistants from making inquiries of any nature directly to Commission staff and directs that all inquiries to staff be routed through the staff director. The policy dictates that commissioners not make direct contact with staff but work through the staff director to exchange information with staff and vice versa. According to Commission documentation, this policy is meant to ensure that requests are carried out and to avoid confusion and difficult or embarrassing situations between staff and commissioners. One memo we saw even stated that violations of this policy could result in appropriate disciplinary action. Another stated that circumventing the staff director can only create confusion and disorder within the agency. According to some commissioners we spoke with, as well as senior Commission managers, this policy stifles communication and productivity within the agency and creates an environment of uneasiness. In addition, while some commissioners believe it is their fiscal duty to oversee the financial activities of the Commission and want complete financial information, others do not and cite their part-time status as the reason why they do not seek more information on financial activities. The commissioners who have the latter view believe that the fiscal responsibility of the agency lies with the staff director. In the absence of independent financial oversight, what is known about the Commission's financial status suggests an austere financial picture. The staff director has characterized the Commission's financial condition in public meetings as "challenging." In fact, although the Commission's budget has remained at essentially the same level for about the last 10 years, it has incurred several new costs associated with operations. For example, the Commission recently converted its accounting and payment processing system from the National Finance Center (NFC) to the Department of the Treasury's Bureau of the Public Debt at a cost to the Commission of almost $300,000.
In addition, Commission officials cited an increase of more than $130,000 in rent for the Commission's headquarters and field offices over the past year. Moreover, the Commission's financial condition has affected its operations. For example, the Commission ordered a moratorium, citing funding limitations, on all previously authorized and new travel by the agency's regional staff or State Advisory Committee members between late March 2003 and the end of July 2003. In addition, the Commission's financial status has left it unable to reduce its high staff vacancy rate, which now stands at 20 percent. While the Commission has taken steps in recent years to improve its operations, it nevertheless continues to operate in a manner not fully consistent with sound management principles. These principles dictate that key decision makers receive timely information on project cost and have a vehicle throughout the project process to communicate their ideas and expertise. We recognize that commissioners should soon be receiving more information on project costs than they had previously received. While it remains to be decided whether the amount and timing of this information will meet the Commission's needs, the challenge now facing commissioners is to work together toward the strategic use of cost information. In addition, the current level of commissioner involvement in the reporting phase of Commission products does not ensure that products reflect the full and wide-ranging expertise of the commissioners, and as such, the potential impact of Commission products can be limited. This outcome can undermine the important mission of the Commission—to help inform and guide the nation on civil rights issues. The Commission's procurement of services is not being conducted in accordance with established internal control standards or federal regulations. We have long held that an agency's internal control activities are an integral part of its planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Without the proper internal controls, there is little public assurance that funds are being spent in a proper and effective manner. As a result of the Commission's weak contract management operations, the Commission does not have all of the information it should have to determine that the contracts it is entering into are reasonable and offer the best value to the government. Although the dollar amount involved in its contracting activities represents a small percentage of its overall appropriation, such expenditures are growing. But regardless of the amount spent on contracting, there is a need for the Commission to take steps now to ensure that current and future contract actions are performed in compliance with established regulations. If the Commission does not adhere to these regulations, then transparency cannot be established and no assurance can be given to the public that the Commission's activities are leading to the proper and efficient use of public funds. The Commission has not had an independent audit of its financial statements in recent years. The requirement for the Commission to prepare and submit an audited financial statement, included in the Accountability of Tax Dollars Act of 2002, is an important step toward strengthening its financial and performance reporting. However, these benefits have yet to be realized.
Given the Commission's limited financial management controls and current budget situation, the lack of external oversight—particularly in terms of financial audits—may make the Commission vulnerable to resource losses due to waste, mismanagement, or abuse. Although funding an independent audit could represent a significant new cost to the Commission, these audits are essential to the sound stewardship of federal funds. Our longstanding position has been that the preparation and audit of financial statements increase accountability and transparency and are important tools in the development of reliable, timely, and useful financial information for day-to-day management and oversight. Preparing audited financial statements also leads to improvements in internal control and financial management systems. To further the Commission's efforts to better plan and monitor project activities, we recommend that the Commission monitor the adequacy and timeliness of project cost information that the staff director will soon be providing to commissioners and make the necessary adjustments, which could include providing information on a monthly rather than quarterly basis; and adopt procedures that provide for increased commissioner involvement in project implementation and report preparation. These procedures could include giving commissioners a periodic status report and interim review of the entire range of Commission draft products so that, where appropriate, commissioners may help fashion, refine, and provide input to products prior to their release to the public. To ensure proper contracting activities at the Commission, we recommend that the Commission establish greater controls over its contracting activities in order to be in compliance with the Federal Acquisition Regulation. These controls could include putting in place properly qualified personnel to oversee contracting activities, properly collecting and analyzing information about capabilities within the market to satisfy the Commission's needs, and properly administering activities undertaken by a contractor during the time from contract award to contract closeout. While the Commission has received waivers from preparing and submitting audited financial statements for fiscal years 2002 and 2003, we recommend that the Commission take steps immediately in order to meet the financial statement preparation and audit requirements of the Accountability of Tax Dollars Act of 2002 for fiscal year 2004. These steps toward audited fiscal year 2004 financial statements could include, for example, (1) identifying the skills and resources that the Commission needs to prepare its financial statements in accordance with generally accepted accounting principles and comparing these needs to the skills and resources that the Commission presently has available; (2) preparing such financial statements, or at least the balance sheet with related note disclosures, for fiscal year 2003; and (3) ensuring that evidence is available to support the information in those financial statements. The U.S. Commission on Civil Rights provided us with two sets of comments on a draft of this report. We received comments from four commissioners and from the Commission's Office of the Staff Director. Commissioners Kirsanow, Redenbaugh, Thernstrom, and Braceras concurred with our conclusions and recommendations on the management practices at the Commission. Their comments are reproduced in their entirety in appendix III.
We did not receive comments from the remaining four commissioners, who include both the chairperson and the vice-chair of the Commission. In comments from the Office of the Staff Director, the staff director pointed out that the Commission is committed to ensuring that its operations are well maintained and will consider implementing whatever recommendations and suggestions appear in the final report. However, the staff director believed that many of the findings were inaccurate and that aspects of the draft report contained errors, unsubstantiated allegations, and misinterpretations. For example, the staff director disagreed with our finding that the Commission lacks sufficient management controls over its contracting procedures and concluded that the Commission's overall fundamental contract practices are sound. Similarly, he disagreed with our findings concerning weaknesses in project and financial oversight. After carefully reviewing his concerns, we continue to believe that our conclusions and recommendations are well founded. The staff director's detailed comments and our responses to them are contained in appendix IV. Finally, the staff director also provided a number of technical comments and clarifications, which we incorporated, as appropriate. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will provide copies of this report to interested congressional committees. We are also sending copies to the commissioners and the staff director, U.S. Commission on Civil Rights. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me on (202) 512-7215 or Brett Fallavollita on (202) 512-8507 if you or your staff have any questions about this report. Other contact and staff acknowledgments are listed in appendix V. During our review of the U.S. Commission on Civil Rights' activities, we focused on the management of individual projects, as we had done during our 1997 review, and examined them in the context of broader management issues at the Commission. For example, to analyze the Commission's expenditures on projects since 1997 in the context of both the project spending discussed in our 1997 report as well as in comparison with the Commission's most recent budget request, we reviewed the Commission's annual Request for Appropriation for fiscal years 1999 through 2004, which provided data on how the Commission actually spent its appropriations for fiscal years 1997 through 2002. We noted that the Commission's fiscal year 2004 Request for Appropriation requests a significant increase in funding, from $9 million in fiscal year 2002 to $15 million in fiscal year 2004. Consequently, we not only focused on how well the Commission currently manages its projects, but also considered the implications of potentially significant increases in project and product spending and the human resources needed to properly manage such increases. We used a combination of Office of Management and Budget (OMB), private sector, and our own guidance as criteria to identify key elements of good project management. These criteria included U.S. General Accounting Office, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1999); Preparation and Submission of Budget Estimates (2002) (OMB Circular No.
A-11, Part 2); Project Management Scalable Methodology Guide (1997, James R. Chapman); A Guide to the Project Management Body of Knowledge (PMBOK Guide)—2000 Edition (The Project Management Institute, Sept. 2003); and Project Management—Conventional Project Management (Northern Institute of Technology, Hamburg, Mar. 2002). Our standards for internal control list top-level review of actual performance (e.g., commissioner review of actual project cost) as a key control activity. OMB Circular No. A-11 emphasizes the importance of managing financial assets. To supplement the general guidance on good project management principles described in OMB's and our guidance to agencies, we identified several private sector principles, practices, and techniques for good project management at the individual project level. For example, the Project Management Scalable Methodology Guide (1997, James R. Chapman) and the Project Management Institute's A Guide to the Project Management Body of Knowledge (PMBOK Guide)—2000 Edition identify project management principles for small, straightforward projects as well as a best practices approach for large, complex projects. According to these principles, regardless of project size or degree of risk, sound project cost management calls for comparisons between project plans and actual project performance—even for projects with minor levels of investment and low risk. We reviewed the most recent complete fiscal year's project activities at the time of our review (fiscal year 2002) and identified 22 projects and 43 products (briefings, executive summaries, internal memorandums, reports, etc.) that resulted from those projects. Of the 43 total products that resulted from these projects as of July 2003, we included in our review the 32 issued during fiscal year 2002. We excluded 3 products issued during fiscal year 2001 and 8 products issued or expected to be issued during fiscal years 2003 or 2004. Table 2 provides details about project names and product titles produced during fiscal year 2002 by those offices that generate headquarters Commission products that result from commissioner-approved projects: the Office of Civil Rights Evaluation (OCRE), the Office of General Counsel (OGC), and the Office of the Staff Director (OSD). The OSD product resulted from a project initiated by the staff director rather than from the commissioners. Table 2 also includes a State Advisory Committee report from Alaska because OCRE staff assisted in preparing the report. The table excludes an Arizona State Advisory Committee briefing and State Advisory Committee reports from Iowa and Pennsylvania in 2002 because OCRE staff were not involved in preparing the briefing or those reports. Some fiscal year 2002 projects will generate products in future years. (See app. II.) This appendix lists the number of products, by type of product, issued or expected to be issued after fiscal year 2002 from projects that were ongoing during fiscal year 2002. (See app. I.) 1. Our draft report clearly indicates that we found deficiencies in the project management practices at the Commission. We focused largely on the role of the Commissioners because they comprise the Commission, which, under the applicable statute, has ultimate responsibility for providing reports to Congress and the President and carrying out other statutory responsibilities. 2.
We do not concur with the staff director’s comment that the Commission has rejected the desirability of Commissioners shaping the findings and recommendations of Commission projects. Commission staff play an important role in running projects and helping produce reports, but their involvement does not diminish the important role that commissioners can and should play in shaping reports on civil rights issues. 3. We disagree that our draft failed to acknowledge the Commissioners’ role in helping scope projects. The draft indicates that Commissioners have some involvement, albeit limited, in the planning process. Our basic point remains: procedures do not provide for systematic commissioner input throughout projects and in practice, commissioners do not always have the opportunity to review many of the reports and other products drafted by the staff before they are released to the public. 4. We believe that the draft report accurately portrays the amount of information provided to commissioners and project managers about ongoing projects. We based our assessment on the (limited) information that has been provided to commissioners and project managers in the recent past. Project managers told us that, during fiscal years 2002 and 2003 (as of August), they were not regularly receiving project cost data and staff hour information. Additionally, the draft recognized that arrangements have recently been made to provide additional information to commissioners. As we noted in a draft recommendation, the efficacy of this action will need to be monitored. For example, the staff director’s first project cost report on September 30, 2003, in response to the commissioners’ April 2003 vote for quarterly cost information, was incomplete because it did not contain cost information for at least two projects that had been regularly reported in monthly staff director reports during fiscal year 2003. 5. In our discussions with Commission officials subsequent to the December 18, 2002, letter, we discussed in further detail the scope of our review. We indicated that our review would primarily focus on current management operations and not entail a specific point-by-point assessment of the Commission’s implementation of our past recommendations. Nevertheless, during our review, we learned that the Commission had made a number of improvements since our 1997 review. Our draft report discusses these improvements. However, our review was not intended to evaluate either the improvement in timeliness or the quality of Commission products since our 1997 review. Notably, Commissioners Kirsanow, Redenbaugh, and Thernstrom expressed concern in their written response to our report that although we did not include an assessment of the quality of Commission products, they found that “reports lack the substantive and methodological rigor worthy of the Commission’s history and seal.” The staff director may wish to pursue the commissioners’ comments in further detail. 6. As noted above, our report includes this recent development. 7. The staff director believes that our sentence in the draft stating that the report contains recommendations for improving Commission operations should be deleted or at least modified to reflect that recommendations are directed at commissioners and not staff offices. We do not believe that a change is warranted. The implementation of our recommendations will clearly involve the commissioners, the staff director, and officials throughout the agency. 8. 
The Commission’s responsibilities are described in the applicable statute. See 42 U.S.C. 1975a. We have qualified our description of the responsibilities we list in our report. 9. Our draft report noted that improvements in certain project management procedures have been made. 10. We believe that the staff director’s comment that project milestone dates are routinely provided to commissioners in monthly reports from the staff director is an overstatement. Our draft report noted that, during fiscal year 2002, the staff director’s monthly reports to the commissioners in preparation for their monthly meetings did not contain a comprehensive list of project milestone dates for all ongoing projects. Furthermore, fiscal year 2003 staff director reports to the commissioners generally did not list all ongoing projects and did not include estimated product issuance dates or project completion dates for most projects. This information was maintained and routinely updated when warranted by OCRE and OGC project managers for project planning, management and monitoring purposes but was not reported in the staff director’s monthly reports to the commissioners. 11. As we note in comment 5, our review was not intended to evaluate the quality of Commission products. 12. We shared a draft of tables 1 and 2 with the staff director and other senior staff before we sent the draft report to the Commission. The officials indicated that the tables were generally accurate. Nevertheless, we made technical corrections, as appropriate, in areas clarified by the Commission. 13. The purpose of the table in which the footnote in question appears is to provide details about the projects produced by those offices that generate headquarters products. The footnote intends to inform the reader about an OGC internal product not contained in the body of the table. The footnote is not intended to convey collateral duties. Therefore, we did not add the information suggested by the staff director. We note, however, the draft report contained a background paragraph which lists the activities carried out by the Commission to accomplish its mission, including the investigation of charges of citizens being deprived of voting rights because of color, race, religion, sex, age, disability, or national origin. 14. The products that the staff director refers to were accurately described in our draft report as expected to be issued after fiscal year 2002, as he acknowledges in his description of expectations regarding each product. 15. We continue to believe that our findings on the extent of financial oversight at the Commission are factually correct. Moreover, the recommendations we made in the draft report were based on the deficiencies we found in the Commission’s management practices. 16. We do not agree that the draft report implied that a flow of financial information from the staff director to the commissioners is inappropriate. In fact, the concern the draft highlights is that information is centralized around the staff director, creating a situation that precludes appropriate checks and balances. 17. We believe that the Commission’s internal communication policy was an appropriate aspect of Commission operations for us to review. As noted in our draft report, some commissioners, as well as senior Commission managers, told us they believe that the current policy stifles communication and productivity within the agency and creates an environment of uneasiness. 
Moreover, the Commission’s policy limiting direct commissioner and staff interaction is not consistent with sound management principles of highly effective organizations. Finally, we do not believe the longevity of a policy justifies its existence when the need for change becomes apparent. 18. While it is true that the Commission has several large dollar agreements with other agencies, these agreements are not contracts awarded pursuant to the FAR, and our review did not extend to them. Our review was limited to an examination of how well the Commission used its contracting authority for purchases above the micro-purchase threshold. Our review focused on the extent to which the Commission complied with regulatory requirements applicable to these procurements. 19. When we requested a list of all contracts for which the Commission budgeted or paid funds against in fiscal year 2002, the Commission provided us with a list of 11 contracts and orders awarded by the Commission. The staff director correctly points out that we requested and received information on a 12th contract that was entered into in fiscal year 2003. This contract was specifically brought to our attention by our requester, but fell outside the timeframe we included in our scope. The draft has been corrected to show 11 contracts reported by the Commission as ongoing in fiscal year 2002. The change in the number of contracts we are reporting on did not affect in any manner our findings or conclusions. 20. Our draft report has been revised to report 11 as the number of contracts that the Commission listed to us that it entered into in fiscal year 2002. The Commission noted in a letter accompanying the list, however, that its list of contracts did not include the Commission’s day-to-day administrative contracts, such as those for court reporters, temporary support services, and meeting room rentals. In discussions with the staff director and the acting chief, Administrative Services and Clearinghouse Division, we were told, as the staff director restates here, that these administrative contracts were modest and done through small scale purchase orders below the micro-purchase threshold. We noted in our draft report that we did not include these contracts in our review. 21. We disagree with the staff director’s conclusion, and the logic used to reach that conclusion, that the Commission’s contracting practices are currently sound. We recognize that the Commission has undertaken many other contracting actions. We did not include these in our analysis because of the reasons stated in comments 18 and 20. Our review of the 11 contracts provided to us reveals that the Commission did not follow proper procedures for the majority of these contracts, that is, all 7 above the micro-purchase threshold. 22. We refer the staff director to the list of 11 contracts provided to us earlier in our review, 7 of which were of amounts exceeding the micro- purchase threshold. The Commission, in addition to lacking documentation on whether some contracts were competed, could not provide documentation to support that publicity requirements were met for other purchases, nor in the absence of such documentation, written justifications from contract files that would explain why those requirements were not met. 23. The staff director acknowledges that the Commission could improve its recordkeeping and documentation procedures in terms of contract maintenance. 
He indicates that we erroneously state that the Commission did not compete its media services contract. In fact, our report states that the Commission could not document that it competed the initial media services contract. Without such documentation, we cannot ascertain whether or not this or certain other contracts at the Commission were, in fact, competed. We believe documentation deficiencies constitute a material breach of proper contracting activities. 24. The staff director’s comments support our finding that documentation deficiencies were found across the contracts we reviewed. To the extent that an unfamiliarity with specific requirements contributed to the deficiencies, our draft recommendation for greater controls, including the need for qualified personnel to oversee contracting activities, becomes underscored. 25. We continue to believe that the Commission did not follow proper procedures in awarding any of its contracts over the micro-purchase threshold, and that this condition limited the Commission’s ability to obtain the benefits of competition. Concerning the 2 contracts specifically mentioned in the staff director’s comments, we found that the Commission did in fact send out requests for quotations; however, it could not document that it had met other regulatory requirements, such as the requirements for publicizing proposed contract actions that serve to ensure that the vendor community is made aware of an agency’s need for services. By not doing so, the Commission limited the potential pool of bidders because other vendors were likely unaware of the contract and therefore did not have the opportunity to submit bids. 26. We continue to believe that the manner in which the Commission obtained media services from the Federal Supply Schedule was not consistent with GSA’s established ordering procedures. While it is true that the GSA has clarified its regulation language to make clear its intent that soliciting from three vendors is mandatory, the staff director in his comments ignores the requirements in those earlier regulations to prepare an RFQ, transmit the request to contractors, and evaluate the responses before selecting the contractor to receive the order. We maintain that even the earlier version of GSA’s regulation was sufficiently clear in its requirement to solicit quotes from more than one vendor. 27. For the reasons cited in comments 28 and 30, we do not agree that we imposed subjective and arbitrary criteria when assessing the soundness of the Commission’s contracting activities. 28. While the Commission’s concern for small, traditionally disadvantaged and women-owned businesses is laudable, it does not provide a license for circumventing established contracting regulations and procedures to achieve these ends. We are aware of the Small Business Administration’s 8(a) program. Having elected not to pursue the 8(a) program, however, it was incumbent upon the Commission to adhere to procedures governing its choice of procurement vehicles. The regulations do not state nor imply that agencies promoting small disadvantaged or women-owned businesses in government procurement may dispense with the other requirements, such as the requirement to solicit multiple bids. Moreover, we note that OMB Circular A-76 does not encourage contracting out but merely establishes procedures for public-private competition. 29. We disagree. The Commission’s relationship with its media services vendor has evolved into a de facto annual award. 
In addition, for fiscal year 2003, the contract had a maximum value of $156,000. We did not request records from the Commission in attempt to tally a fiscal year 2003 total of funds actually spent. We did, however, tally a fiscal year 2002 total of funds spent on the media services contract and found that $131,225 was spent on a “not-to-exceed” limit of $140,000. We have added a footnote in the report section to clarify this point. 30. We disagree with the staff director’s belief that our findings are subjective and erroneous. We continue to believe that it is important to provide written performance-based requirements documents and do not believe that simplified acquisition procedures preclude this need. 31. As our draft report stated, written performance-based requirements documents can help ensure contractors’ work against measurable standards. 32. For the 7 contracts we reviewed with amounts above the micro- purchase threshold, the Commission did not provide contractors in writing with specific task orders, instead providing oral information on tasks to be performed. For example, for its largest contract (media services), a broad statement of work with little detail was written to accompany the order. The staff director told us that he meets regularly with the contractor to discuss specific tasks under the order. As we state in comment 31, without written performance-based requirements documents, contractors’ work products cannot be successfully evaluated in a transparent manner. 33. The Commission does not maintain written information on specific work tasks communicated to the vendor, expected timeframes for specific tasks to be performed, or the definition or description of how tasks were to be performed. Rather, the work reports that the staff director refers to consisted of several press releases, meant to illustrate activities performed by its media services vendor and copies of vendor invoices that showed tasks such as, media outreach/story placement, faxing, planning and consultation, etc., for which the Commission was billed. We continue to believe that the Commission cannot effectively assess contractor performance based on the documentation we were provided. 34. The staff director recognizes that the Commission has experienced significant turnover with regard to its contracting personnel. Yet he disagrees with our characterization that the Commission’s current personnel are not sufficiently qualified in certain areas of contracting. The problems identified in this report should alert the Commission to the necessity of improving its contracting support or to look for outside assistance in this area. 35. To conduct our review, we relied upon the extensive legal and technical assistance available within our agency. When issues arose during our interviews that required either GAO or Commission officials to conduct additional analysis, then a follow-on discussion usually transpired. We stand behind the findings reported in the draft report. Dennis Gehley made significant contributions to this report, in all aspects of the work throughout the review. In addition, Caterina Pisciotta assisted in gathering and analyzing information and in writing a section of the report; Lori Rectanus was instrumental in developing our overall design and methodology; Corinna Nicolaou assisted in report and message development; Julian Klazkin and Robert Ackley provided legal support; and Ralph Dawn and H. Kent Bowden provided specialized assistance in the areas of contract and financial management.
Over the past 10 years, GAO, the Congress, the Office of Personnel Management (OPM), and others have raised numerous concerns about the U.S. Commission on Civil Rights. GAO was asked to assess (1) the adequacy of the Commission's project management procedures, (2) whether the Commission's controls over contracting services and managing contracts are sufficient, and (3) the extent of recent oversight of the Commission's financial activities. The Commission has established a set of project management procedures for commissioners and staff to follow when they plan, implement, and report the results of approved Commission projects. However, the procedures lack, among other things, a requirement for systematic commissioner input throughout projects. As a result, commissioners lack the opportunity to review many of the reports and other products drafted by Commission staff before products are released to the public, which serves to significantly reduce the opportunity for commissioners to help shape a report's findings, recommendations, and policy implications of civil rights issues. The Commission lacks sufficient management control over its contracting procedures. The Commission routinely did not follow proper procedures for its fiscal year 2002 contracting activities. For the Commission's largest dollar contract, key documentation on how the contract was initially awarded was missing from contract files. Moreover, Commission officials did not follow the legal requirements to obtain competition for its subsequent media services contracts. As a result, the Commission did not have all of the information it should have had to determine whether its contracts provided the best value to the government. Little, if any, external oversight of the Commission's financial activities has taken place in recent years. An independent accounting firm has not audited the Commission's financial statements for the last 12 years. Although the Accountability of Tax Dollars Act of 2002 requires the Commission--along with certain other executive agencies--to have its financial statements independently audited annually, the Commission has been granted a waiver by the Office of Management and Budget (OMB) from compliance with the financial statement preparation and audit requirements of the act for the fiscal years 2002 and 2003 audit cycles, which OMB was authorized to waive during an initial transition period of up to 2 years.
Traditionally, a drug is compounded, through the process of mixing, combining, or altering ingredients, to create a customized drug tailored to the medical needs of an individual patient upon receipt of a prescription. For example, a pharmacist may tailor a drug for a patient who is allergic to an ingredient in a manufactured drug or prepare a liquid formulation for a patient who has difficulty swallowing pills. Some pharmacies also compound drugs in advance of receiving individual patient prescriptions in anticipation of receiving prescriptions based on historical prescribing patterns, a practice referred to as anticipatory compounding. Compounded drugs include nonsterile preparations, such as capsules, ointments, creams, gels, and suppositories; and sterile preparations, including intravenously administered fluids and injectable drugs. Compounded sterile drugs pose special risks of contamination if not made properly and require special safeguards to prevent injury or death to patients receiving them. Drug compounding is an integral part of the pharmacy profession and is practiced in a variety of settings, including hospital pharmacies, community pharmacies, chain drug store pharmacies, and home infusion settings. The exact proportion of all prescriptions filled by compounded drugs is unknown. In 2003, we reported that estimates ranged from 1 percent to 10 percent. More recently, in 2013, the International Academy of Compounding Pharmacists estimated that the compounding industry made up 1 to 3 percent of the U.S. prescription drug market. The exact number of pharmacies that compound drugs is also unknown. In 2013, the International Academy of Compounding Pharmacists provided the following estimates: About 26,000 community-based pharmacies reported that they provide some sort of prescription compounding services, based on information from the National Council of Prescription Drug Program’s database on pharmacies. Of those 26,000 community-based pharmacies, about 7,500 pharmacies specialize in compounding. Of those 7,500 community-based pharmacies that specialize in compounding, about 3,000 pharmacies compound both sterile and nonsterile preparations. In addition, there are about 8,200 hospital pharmacies in the United States, and all of them are likely conducting some sort of compounding, both sterile and nonsterile. A recent report indicates that there has been an increase in the outsourcing of drug compounding in the last decade, primarily by hospitals. In April 2013, the HHS-OIG reported that nearly all (92 percent) of surveyed hospitals that participated in Medicare reported using compounded sterile products, and that more than three-fourths of these hospitals (77 percent) purchased some of these compounded drugs from at least one outside pharmacy. The HHS-OIG found factors that hospitals cited for outsourcing included the need to ensure a ready supply of products in the event of shortages and the need for products with extended shelf lives, which require sophisticated equipment and testing to prepare these products that may not be readily available on the hospital premises. State pharmacy regulatory bodies are responsible for oversight of the practice of pharmacy. All 50 states describe drug compounding in their state laws and regulations on pharmacy practice, although specific statutes or regulations vary across states, according to NABP. 
USP is involved in setting standards that affect compounding. According to USP, compounding standards help practitioners adhere to widely acknowledged, scientifically sound procedures and best practices, and facilitate the delivery of consistent and good-quality prepared medicines to patients. Twenty-five state pharmacy regulatory bodies reported that they require compliance with USP's chapter on sterile compounding, according to the NABP's 2013 survey of pharmacy law.

FDA considers compounded drugs to be "new drugs" subject to FDA oversight; however, the agency has acknowledged that it is not practicable for pharmacies to complete and obtain approval for a new drug application for each compounded drug prepared for an individual patient. In 1992, FDA, through guidance, and, in 1997, Congress, through legislation, attempted to clarify when compounded drugs will be exempt from certain requirements of the FDCA, including new drug approval requirements. Specifically, the Food and Drug Administration Modernization Act of 1997 (FDAMA) enacted section 503A of the FDCA. This section exempted drug products compounded by a pharmacist or physician based on a valid prescription for a compounded product that is necessary for the identified patient from three key provisions of the FDCA that are otherwise applicable to drugs, provided the pharmacy had, among other conditions, not solicited prescriptions or advertised or promoted the compounded drugs. In 2001, however, the United States Court of Appeals for the Ninth Circuit struck down all of the advertising, promotion, and solicitation provisions of section 503A of the FDCA because those provisions violated the Free Speech Clause of the First Amendment. The court also held that, because these provisions could not be severed from the remainder of section 503A, all of section 503A was invalid. In 2002, the United States Supreme Court struck down the law's advertising, promotion, and solicitation restrictions without addressing whether the rest of section 503A remained law. As a result, FDA issued a revised version of its compliance policy guide on drug compounding in 2002, which provides guidance, in light of the Ninth Circuit and Supreme Court decisions, on the types of factors the agency will consider in determining whether to take enforcement action against drug compounders for violations of the FDCA. These factors include activities, such as offering compounded drug products at wholesale, that suggest a drug compounder is engaged in drug manufacturing, rather than drug compounding. Subsequently, in 2005, the United States Court of Appeals for the Fifth Circuit issued a decision holding that, although section 503A's advertising, promotion, and solicitation restrictions were invalid, these restrictions could be severed from the rest of section 503A and, therefore, the law's remaining drug compounding provisions remain valid. See appendix II for details about these developments and how they have affected FDA's authority to oversee drug compounding.

The FDCA provides FDA authority to inspect pharmacies that compound drugs; however, this authority is limited. Generally, FDA's inspection authority does not extend to a pharmacy's records if the pharmacy meets certain requirements. While FDA has not routinely inspected compounding pharmacies, FDA has used its authority to conduct some inspections in recent years, generally in response to complaints.
These inspections have resulted in FDA issuing inspection observation reports, which are called FDA form 483s, and, in some cases, warning letters. FDA's FACTS database contains information on these inspections, including the type of inspection (e.g., routine or in response to a complaint).

Under the FDCA, drug manufacturers are required to register with FDA and list the drugs they manufacture. The FDCA exempts from these registration and listing requirements those pharmacies that meet certain requirements. FDA's Drug Registration and Listing System contains information on drug establishments that have registered with FDA to market their drugs in the United States. These establishments provide information, including company name and address, and identify the drugs they manufacture for commercial distribution in the United States.

Although FDAMA attempted to clarify FDA's authority to oversee drug compounding, subsequent court decisions have contributed to a lack of clarity regarding the legal standards FDA must apply to oversee drug compounding. Specifically, two federal circuit court decisions resulted in differing FDA authority over drug compounding in different parts of the country, which has affected FDA's ability to oversee drug compounding. Section 503A provisions exempting certain compounded drugs from the FDCA's good manufacturing practice, certain labeling, and new drug and abbreviated new drug application requirements are in effect in those states in the Fifth Circuit, in which the U.S. Court of Appeals has held that the law, other than its advertising, promotion, and solicitation provisions, is valid. However, FDA follows its 2002 compliance policy guide in states in the Ninth Circuit, in which the U.S. Court of Appeals has held all of the drug compounding provisions in section 503A are invalid. In states outside of the Fifth and Ninth Circuits, where federal courts have not considered the validity of these drug compounding provisions, FDA considers both section 503A's drug compounding provisions and its 2002 compliance policy guide to guide its oversight. Figure 1 shows how FDA generally conducts its oversight of drug compounding in different parts of the country based on the differing court decisions.

FDA lacks reliable information on entities that compound drugs, the types of drugs being compounded, and adverse events related to compounded drugs. Until 2013, FDA limited its inspections of compounding pharmacies to those conducted in response to complaints or adverse events, called "for cause" inspections; however, the agency has recently conducted inspections of compounding pharmacies that were known to produce "high-risk" sterile compounded drugs, and identified serious problems. FDA officials, including the FDA Commissioner, have stated that, under the FDCA, compounding pharmacies are generally not required to register with FDA or list their products, and therefore FDA does not know who they are and what they are compounding. As a result, FDA has stated that one of the reasons it has not routinely inspected compounding pharmacies is because the agency does not know who they are. Officials with some of the organizations we interviewed said there has been confusion regarding the extent to which FDA oversees the compounding pharmacies that registered with FDA as drug manufacturers.
Although drug manufacturers are required to register with FDA by providing company information such as name, location, and the drugs the company manufactures, compounding pharmacies meeting the FDCA’s registration exemption are not required to register. However, according to FDA officials, neither the law nor the agency precludes those compounding pharmacies that are exempt from registration from voluntarily doing so, and some compounding pharmacies have registered with FDA as manufacturers and marketed themselves as “FDA- registered.” FDA officials told us that registering as a manufacturer does not necessarily result in the application of regulatory requirements that apply to manufacturers or in FDA inspection for compliance with these requirements. For example, a compounding pharmacy may voluntarily register with FDA; however, this registration does not by itself give FDA authority to require the pharmacy to comply with FDA’s good manufacturing practices and other requirements that apply to drug manufacturers. Nonetheless, these pharmacies appear as registered manufacturers in FDA’s registration database, the Drug Registration and Listing System. When entities that compound drugs on a large scale register with FDA as manufacturers and market themselves as “FDA-registered,” it may erroneously convey an endorsement by FDA. As a result, some state officials and purchasers may incorrectly assume FDA inspects the entities or has reviewed and approved their compounded drugs. Officials from one of the national pharmacy organizations told us that they recently learned that a pharmacy can be registered with FDA as a drug establishment as well as with the state as a pharmacy. They added that healthcare professionals and the public may assume that if an entity registers with FDA then that means that FDA is in some way regulating that entity. In addition, NABP officials noted that they were aware of some entities engaged in drug compounding whose drug compounding activities are not subject to state oversight because they are registered as manufacturers with FDA and the states assume FDA is overseeing these activities. Yet, if a compounding pharmacy is voluntarily registered with FDA, the agency would not inspect it for compliance with good manufacturing practices because it does not manufacture FDA-approved drugs. Further, FDA lacks reliable data to make decisions to prioritize its inspection workload and other follow-up and enforcement actions. Under standards for internal control in the federal government, relevant, reliable, and timely information should be available for external reporting purposes and management decision making. According to FDA officials, although the agency’s FACTS database has a code for inspections of compounding facilities, some compounding pharmacies could be inspected and coded as either manufacturers of human drugs or manufacturers of veterinary drugs, and the FACTS database would not identify them as inspections of compounding pharmacies. In addition, while FDA can manually look up the results of an individual inspection, the agency does not have ready access to all of the final classification of inspections for those compounding pharmacies it can identify in its FACTS database; in these instances, FACTS does not indicate the agency’s final determination whether an official action was indicated, voluntary action was indicated, or if no action was indicated from the inspection results. 
According to FDA officials, some of the final decisions are in hard copy, and the database includes recommendations from the district office inspectors, which may differ from the final inspection classifications. Without reliable, timely data on all inspections conducted and the actions required and taken following those inspections, FDA lacks ready access to key data to inform its decision making on its oversight priorities and to take appropriate action when problems are identified.

FDA also lacks complete information on adverse events related to compounded drugs. Generally, if a manufacturer receives drug- or certain device-related adverse event reports, it must send them to FDA. Health care professionals and consumers can voluntarily file adverse event reports with FDA and may also report these events to the products' manufacturers. User facilities (e.g., hospitals and nursing homes) must report certain device-related—but not drug-related—adverse events to FDA as well. 21 C.F.R. §§ 314.80(c), 803.30, 803.50. Adverse events related to compounded drugs may also result from pharmacists' and technicians' miscalculations and mistakes in filling prescriptions.

Until 2013, FDA limited its inspections of compounding pharmacies to those conducted in response to complaints or adverse events, called "for cause" inspections; however, the agency has recently conducted inspections of compounding pharmacies that FDA identified as known to produce "high-risk" sterile compounded drugs. From its available data, FDA identified 194 "for cause" inspections of compounding pharmacies the agency conducted from February 8, 2002, through May 11, 2012, under its pharmacy compounding assignment code for human drugs. Of these 194 inspections, FDA issued 63 form 483 inspection observation reports outlining significant objectionable conditions identified during the inspections. FDA subsequently issued at least 31 warning letters to pharmacies as a result of these inspections for problems such as bacterial and fungal contamination found in sterile clean rooms and in finished product samples, improper hygiene and garbing procedures (e.g., putting on gowns, gloves, and shoe covers), failure to conduct appropriate laboratory testing on drug products, and inadequate ventilation. However, FDA has not taken any enforcement actions against the 31 entities where the agency found problems significant enough to send warning letters, according to FDA officials. Further, we found that 19 of the 194 compounding pharmacies were registered with FDA as drug manufacturers. While FDA policy requires that the final inspection classification (which states whether official action, voluntary action, or no action was indicated based on the inspection findings) be entered into the agency's FACTS database, FDA officials said they could not readily provide the final inspection classification for the 194 inspections of compounding pharmacies. The officials said that in some cases the database included FDA district officials' recommendations for inspection classification rather than the final inspection classification. As a result, we could not ascertain how many of the 194 inspections of compounding pharmacies found problems that were significant enough for FDA to determine that official action was indicated.

More recently, FDA began inspecting compounding pharmacies in February 2013 that, according to the agency, were known to produce "high-risk" sterile compounded drugs. These inspections were not the for-cause inspections that FDA has typically done in the past when inspecting compounding pharmacies.
Rather, FDA's objective was to determine whether certain pharmacies that were known to have produced high-risk sterile drug products in the past posed a significant threat to public health from poor production practices. According to FDA officials, the agency identified 31 compounding pharmacies to inspect using criteria that included whether a warning letter had been issued to the pharmacy in the past 10 years, whether the pharmacy compounded sterile injectable drugs, whether there were adverse drug events reported, or whether there were complaints received from the FDA district office or others. FDA officials said they also reviewed related congressional committee reports that mentioned specific pharmacies and reviewed pharmacy websites. In summarizing these efforts, FDA reported that pharmacies meeting at least two of FDA's criteria were included in the inspections. As of April 29, 2013, FDA had issued form 483 inspection observation reports to 30 of the 31 compounding pharmacies it inspected as part of its recent inspections. FDA's observations included inappropriate or inadequate, or both, clothing for sterile processing, lack of appropriate air filtration systems, insufficient microbiological testing, and other practices that create risk of contamination.

As of May 21, 2013, 7 of the 31 compounding pharmacies had voluntarily recalled some or all of their sterile compounded products as a result of observations from these recent FDA inspections. For example: FDA sampled a compounded sterile injectable solution during one of its inspections in March 2013 and found bacteria in the product, which resulted in the compounding pharmacy immediately announcing a nationwide recall of all of its sterile compounded products, which included over 50 sterile drug products. Another compounding pharmacy recalled its sterile drug products that had not yet reached the expiration date listed on the product because of a lack of sterility assurance. This recall included approximately 95 dosage units of various sterile compounded drugs that the pharmacy supplied to the offices of licensed medical professionals located within its state; however, some patients that received products from those medical professionals may live in other states. Further, according to our analysis, 10 of the 31 high-risk compounding pharmacies that FDA inspected were also registered in FDA's drug manufacturer database. Even though these compounding pharmacies were registered with FDA, agency officials said the agency does not routinely inspect these pharmacies despite their registration because registration alone does not trigger a routine inspection. Additionally, 8 of the 10 were individual facilities of two different larger compounding pharmacies, both of which had websites advertising they were FDA-registered.

The four states we reviewed—California, Connecticut, Florida, and Iowa—have each recently taken actions, such as working with national pharmacy organizations, to improve their oversight of drug compounding. In addition, national pharmacy organizations have undertaken efforts to help states oversee drug compounding. However, some states may lack the resources to provide the necessary oversight of drug compounding. All four of the states we reviewed recently took steps to potentially strengthen their oversight of drug compounding.
These steps included developing an inspection program for sterile drug compounders that dispense drugs in the state, but are located outside of the state, and drafting new legislation to require the board of pharmacy to conduct on-site inspections prior to licensing a pharmacy. Examples of actions taken by each of the four states we reviewed follow:

California: On May 29, 2013, the California Senate passed legislation that would prohibit any pharmacy from compounding or dispensing, and any nonresident pharmacy from compounding for shipping into the state, sterile compounded drug products unless the pharmacy has obtained a sterile compounding pharmacy license from the California Board of Pharmacy; require inspection of resident and nonresident pharmacies by the board prior to licensure; require resident and nonresident pharmacies to report adverse events for compounded drugs to both the California State Board of Pharmacy and MedWatch, FDA's adverse event reporting system; and require resident and nonresident pharmacies to submit a list of all sterile medications compounded by the pharmacy during the prior 12 months before obtaining an annual renewal of the sterile compounding license, among other requirements. Currently, California law requires that a pharmacy that compounds sterile injectable drug products in California, or that ships sterile injectable products into California, obtain a special license issued by the board; however, the law exempts from this licensure requirement certain pharmacies that have current accreditation from a private accreditation agency approved by the board. Pharmacies that obtain licensure by the board are subject to prelicensure inspections, as well as annual inspections prior to renewal of the license. Nonresident pharmacies must provide a copy of a recent inspection report issued by the pharmacy's licensing agency, or a recent report from a private accrediting agency approved by the board, documenting the pharmacy's compliance with board regulations regarding the compounding of injectable sterile drug products. In describing the board's support of the proposed legislation, a California State Board of Pharmacy official told us that the board believed it important that all California and nonresident pharmacies compounding sterile injectable drugs be subject to state inspections, including those with an accreditation. As of June 14, 2013, the legislation was pending before a California State Assembly committee.

Connecticut: An official from Connecticut's Drug Control Division—which conducts inspections of pharmacies in the state and houses the Commission of Pharmacy Board Administrator, which oversees pharmacy licensing—told us that, as of April 2013, the state was working to tighten its regulations and implement inspection practices regarding in-state sterile drug compounders. For example, the state plans to begin conducting more thorough pharmacy inspections in which the inspectors consider additional attributes, such as compliance with USP standards on sterile compounding, the physical environment where the facility is located, and the number of sales representatives employed by the pharmacy. In addition, the Drug Control Division is working to propose new regulations to allow the state to better track and regulate the sale of compounded sterile medications produced by resident and nonresident sterile drug compounders. However, the details of these proposed regulations were not available as of June 2013.
Florida: On November 20, 2012, the Florida Board of Pharmacy issued an emergency rule requiring all resident pharmacies and nonresident pharmacies that ship drugs to Florida to immediately notify the board of their compounding activities. One goal of Florida's emergency rule was to determine the scope of sterile and nonsterile compounding within Florida's resident and nonresident licensed pharmacies. More than half (55 percent) of the 8,193 responding pharmacies reported that they compound nonsterile products, such as ointments or tablets; and 12 percent reported that they compound sterile products, such as injectable and ophthalmic solutions. Florida found that about one-third (32 percent) of the 946 pharmacies that perform sterile compounding were nonresident pharmacies. According to Florida Board of Pharmacy officials, prior to the emergency rule, the board did not know how many pharmacies compounded drugs, how many nonresident pharmacies shipped compounded drugs into the state, or whether they compounded nonsterile or sterile drugs. According to these officials, the board intends to use this newly acquired information to improve the board's oversight activities, such as identifying and inspecting compounding pharmacies. As of May 2013, the Florida Board of Pharmacy was considering whether to require pharmacies to complete an updated survey biennially in order to renew their pharmacy licenses.

Iowa: Iowa is inspecting drug compounders that are licensed by the state as nonresident pharmacies and dispensing compounded drugs in Iowa. Iowa established a consultancy services agreement with NABP in December 2012, and inspectors from NABP began inspecting the 581 nonresident pharmacies identified by the state at that time. The results of these inspections are expected to reveal whether the selected pharmacies are compounding drugs in compliance with state regulations. According to Iowa Board of Pharmacy officials, the state does not have information on the extent that Iowa's licensed nonresident pharmacies compound drugs, how many nonresident pharmacies ship compounded drugs into the state, or whether they compound nonsterile or sterile drugs. However, NABP's inspections have begun to provide some of this information. As of April 2013, Iowa's Board of Pharmacy had taken six formal disciplinary actions against five out-of-state compounding pharmacies following NABP inspections and, according to an Iowa Board of Pharmacy official, the board anticipates more disciplinary actions during the remainder of 2013 and early 2014. An Iowa Board of Pharmacy official anticipates that NABP inspectors will have visited all nonresident pharmacies licensed by the state by the end of 2013 or early in 2014.

At the national level, pharmacy organizations have undertaken a number of efforts to help states oversee drug compounding. For example, national pharmacy organizations have developed standards for compounded drugs that could be adopted by states. The following are examples of efforts undertaken by national pharmacy organizations.

The National Association of Boards of Pharmacy (NABP): NABP has initiated the Compounding Action Plan to identify and inspect compounding pharmacies. It includes continued collaboration on the Iowa nonresident inspection program, discussed above, and the sharing of inspection results and related actions. Through this plan, NABP intends to collect data on the number of compounding pharmacies, including their scope of operations, in all states, and inspect these pharmacies.
NABP officials said they believe that many of the 581 nonresident pharmacies licensed and identified by the Iowa Board of Pharmacy also hold licenses with many, if not all, of the other states requiring nonresident licensure. Using the Iowa nonresident licensed pharmacy list as a starting point, NABP sent Iowa’s list to each state to confirm information regarding these pharmacies, such as whether the pharmacy has been disciplined, whether it is engaged in sterile compounding, or whether it is engaged in “nontraditional” compounding activities. In addition, NABP asked all states to identify any known or suspected compounding pharmacies in their state that are not on the Iowa nonresident pharmacy list. As a result, NABP officials told us that NABP added some additional pharmacies to Iowa’s original inspection list. As of June 2013, NABP had inspected 215 pharmacies. In addition to its Compounding Action Plan, NABP created and continues to maintain a Model State Pharmacy Act and Model Rules for states to use when developing new pharmacy laws and regulations, including rules specific to sterile compounding. According to NABP officials, each state has adopted aspects of NABP’s model act and model rules. The Pharmacy Compounding Accreditation Board (PCAB): In 2006, eight national pharmacy organizations established the PCAB, a voluntary accrediting organization for sterile and nonsterile drug compounders. According to an organization official, PCAB’s national standards are based on the consensus of industry experts of those elements that should exist in a pharmacy that adheres to high quality standards. PCAB accreditation indicates that the staff involved in compounding have proper and ongoing training; that the pharmacy uses active pharmaceutical ingredients and inactive materials from appropriate suppliers; that all compounding procedures are fully documented and carried out in conformance with established formulas; and USP standards for compounding. According to a PCAB official, as of June 26, 2013, 176 drug compounding pharmacies received PCAB accreditation, and 124 additional drug compounding pharmacies have applied for PCAB accreditation. Some states may lack the fiscal or staff resources to provide the necessary oversight of drug compounding. A number of officials from state boards of pharmacy attending a December 2012 meeting conducted by FDA expressed confidence that their states had adequate resources to oversee drug compounders, but were concerned about resources in other states. They explained that, until recently, they depended on the states where the pharmacies were located to license and regulate those pharmacies. However, many state budgets have been cut and it is uncertain whether all states have the resources or qualified staff to inspect and otherwise appropriately oversee their licensed pharmacies. The effect of limited state resources may reach across state lines, and it may not be correct to assume that a pharmacy licensed by another state is being regulated adequately. In addition, differences in pharmacy inspection practices among states may affect oversight of drug compounding in other states. For example, each of the four states we reviewed require licensure or registration of nonresident pharmacies that provide pharmacy services to users in the state, and they require nonresident pharmacies applying for a license or registration to have a current license, permit, or registration issued by the regulatory authority of their home state. 
The states in our review also have generally relied on the home states of the nonresident pharmacies to inspect these pharmacies on a regular basis. However, state officials and officials from national pharmacy organizations we interviewed told us that the frequency of pharmacy inspections and the qualifications of the pharmacy inspectors vary widely among states, and it is uncertain whether all nonresident pharmacies receive adequate oversight from their home states. Of the four states in our review, one required annual inspections of all pharmacies located in the state and one required annual inspections of all sterile drug compounding pharmacies located in the state, while another required routine inspections of retail pharmacies in the state once every 4 years. In addition, three of the four states required all pharmacy inspectors to have a license to practice pharmacy in that state, while one state reported having some inspectors without pharmacist licenses. Officials representing several national pharmacy organizations that we interviewed also expressed concerns regarding whether states have enough resources to regulate and inspect pharmacies on a timely basis. Instead, some states inspect pharmacies only in response to a problem they become aware of through a complaint or adverse drug event. Some of these officials also expressed concern regarding FDA’s resources to oversee drug compounding. For example, officials from NABP told us that both FDA and the state boards of pharmacy need more resources for the oversight of drug compounding. Recognizing the need for additional resources to oversee drug compounders, the bill that the California legislature is considering—a bill that would require nonresident pharmacies shipping sterile compounded drugs into the state to have an on-site inspection by the California Board of Pharmacy prior to licensure—would also require those pharmacies to pay for inspection- related travel expenses. To ensure that compounding pharmacies receive adequate oversight, it is essential to have clear roles for FDA and states regarding the regulation and oversight of drug compounding. The inconsistent federal circuit court decisions complicate FDA’s ability to oversee drug compounding by requiring FDA to approach the regulation of drug compounding differently in different parts of the country. In addition, state approaches to the oversight of pharmacies, including compounding pharmacies, vary depending upon each state’s regulations and the resources each state devotes to licensing and inspecting its pharmacies. Taken together, the different regulatory approaches FDA must take and the variation in how states oversee drug compounding, create gaps in oversight, which could lead to inadequate assurance that public health is protected. To adequately carry out the oversight of compounded drugs, FDA must have data systems in place to produce timely, reliable information on inspections, the findings of those inspections, and enforcement actions taken related to compounded drugs. Without reliable, timely data, the agency will not have the information needed to intercede and protect Americans from unnecessary harm when problems are identified. Recent FDA inspections of 31 entities that produce compounded drugs and the subsequent drug recalls highlight the potential risk to public health of failing to oversee these types of entities. 
At the same time that FDA lacks complete information on inspections and enforcement actions taken related to compounded drugs, entities that compound drugs may register as manufacturers in the agency’s registration database, and some advertise themselves as FDA-registered. As a result, states and purchasers may incorrectly assume that FDA has approved the products and inspected the facilities for compliance with good manufacturing practices. To help ensure appropriate oversight of the safety of products from the entities that prepare and distribute compounded drugs that have a high potential to adversely affect public health, Congress should consider clarifying FDA’s authority to regulate entities that compound drugs. We recommend that the Secretary of Health and Human Services direct the Commissioner of the FDA to take steps to consistently collect reliable and timely information in FDA’s existing databases on inspections and enforcement actions associated with compounded drugs, and clearly differentiate in FDA’s database, those manufacturers of FDA- approved drugs that FDA inspects for compliance with good manufacturing practices from those entities compounding drugs that are not FDA-approved and that FDA does not routinely inspect. We provided a draft of this report to HHS, which oversees FDA, for comment. HHS provided written comments, which are reprinted in appendix III, and technical comments, which we incorporated as appropriate. HHS stated that our report accurately details the limitations associated with FDA’s current authority to oversee drug compounding. HHS’s comments also support the Matter for Congressional Consideration that Congress should consider clarifying FDA’s authority to oversee entities that compound drugs. HHS neither agreed nor disagreed with our recommendations. Regarding our first recommendation to direct FDA to consistently collect reliable and timely information in FDA’s existing databases on inspections and enforcement actions associated with compounded drugs, HHS stated that although FDA’s FACTS database can be improved to better aggregate data and to facilitate evaluation of compounding pharmacy activities, these deficiencies do not materially impact FDA’s ability to protect the public from harm when problems are identified. We understand that FDA has the ability to access the data associated with compounded drugs by searching under a company name or requesting information across FDA centers and offices; however, as our report notes, FDA lacks ready access to all of the data and lacks the ability to run queries or aggregate the data. For example, when we requested the final inspection classifications for 194 inspections of compounding pharmacies, FDA could not provide this information because, according to FDA officials, the FACTS database does not contain all of the final decisions and obtaining all of the final inspection classifications would require time-consuming manual searches of information maintained in hard copy. As a result, we could not ascertain how many of these inspections found problems that were significant enough for FDA to determine that official action was indicated. Therefore, we continue to believe that FDA should take steps to consistently collect reliable and timely information in its databases on inspections and enforcement actions associated with compounded drugs. 
Doing so would provide the agency with ready access to key data to inform its decision making on its oversight priorities and allow it to take appropriate action when problems are identified. In its comments, HHS stated that FDA will take steps to further improve its databases to ensure that inspections and actions regarding compounding going forward are coded consistently and are more readily identifiable through electronic searches, and that the final classification for inspections of drug compounders are entered into the FACTS database. These steps are consistent with our recommendation. Regarding our second recommendation, HHS stated that FDA will consider whether it would be possible or appropriate to differentiate in its database those compounding pharmacies that register voluntarily from conventional manufacturers of FDA-approved drugs that are required to register. These conventional manufacturers are already subject to routine inspections by FDA and are required to list the FDA-approved products they manufacture. Therefore, these entities should already be known to FDA. HHS also commented that FDA will provide information to the public about what it means—and does not mean—to voluntarily register with FDA. HHS further stated that FDA has recommended that Congress require pharmacies engaged in nontraditional compounding in the United States to register with FDA and list the drugs they are compounding, all of which is consistent with our recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To identify actions planned or taken by states, we interviewed representatives of the state pharmacy regulatory bodies from four states: California, Connecticut, Florida, and Iowa. We chose these states to provide insight into how a range of states approach the oversight of drug compounding; however, the approaches and experiences of these states are not generalizable to all 50 states. We selected these states to reflect a range of characteristics, including census region, population, number of licensed pharmacies, and variation in compounding regulations. Table 1 lists select data for each selected state. Amendments to the Federal Food, Drug, and Cosmetic Act (FDCA) enacted in 1997, and a series of federal court decisions regarding the validity of those amendments, have resulted in several significant shifts in FDA’s authority and approach to the regulation of drug compounding over the last two decades. Differences in these court decisions have resulted in inconsistent FDA authority to oversee drug compounding, which, according to the agency, has prompted it to apply three different regulatory approaches to compounded drugs depending upon the federal court jurisdiction in which the drugs are compounded. This appendix describes these legal developments. 
In 1992, FDA issued a compliance policy guide that articulated the agency’s approach to applying the FDCA’s new drug, adulteration, and misbranding provisions to compounded drugs. FDA noted its longstanding policy of deferring to state regulation of pharmacies engaged in traditional compounding activities but stated that it was issuing the compliance policy guide to identify those circumstances under which the agency believed establishments with retail pharmacy licenses were engaged in “manufacturing, distributing, and promoting unapproved new drugs” in a manner outside the traditional pharmacy practice of compounding. The guide described the circumstances under which the agency might exercise its enforcement discretion to take action against such establishments for violations of the FDCA’s new drug approval, adulteration, and misbranding provisions. In 1997, Congress passed and the President signed into law the Food and Drug Administration Modernization Act of 1997 (FDAMA), which, among other things, amended the FDCA to expressly permit drug compounding under certain conditions and to exempt compounded drugs meeting these conditions from certain provisions of the FDCA. In particular, under section 503A of the FDCA, as enacted by FDAMA, compounded drugs meeting these conditions were expressly exempt from the requirements that a drug be manufactured in conformity with current good manufacturing practice; that a drug’s labeling carry adequate directions for use; and that the drug be the subject of an approved new drug application. To qualify for these exemptions, the pharmacist, physician, or pharmacy compounding the drug had to meet certain criteria, including refraining from advertising, promoting, or soliciting prescriptions for the compounding of any drug. Shortly after FDAMA’s enactment, a group of seven pharmacies challenged section 503A’s advertising, promotion, and solicitation restrictions in federal district court, alleging that these restrictions violated the Free Speech Clause of the First Amendment. Agreeing with the plaintiffs, the U.S. District Court for the District of Nevada invalidated section 503A’s advertising, promotion, and solicitation restrictions, severing these restrictions from the remainder of section 503A. In 2001, the U.S. Court of Appeals for the Ninth Circuit affirmed the district court’s First Amendment holding; however, the Ninth Circuit took the view that Congress would not have enacted section 503A without the advertising, promotion, and solicitation provisions and, therefore, the law’s advertising, promotion, and solicitation provisions were not severable. As a result, it held that section 503A, in its entirety, was invalid. In April 2002, the United States Supreme Court in Thompson v. Western States Medical Center affirmed the Ninth Circuit’s ruling invalidating section 503A’s advertising, promotion, and solicitation provisions. Because neither the government nor the pharmacies appealed the Ninth Circuit’s severability ruling, the Supreme Court declined to address the validity of the remaining nonadvertising portions of section 503A. 
One month after the Supreme Court’s ruling in Western States, FDA revised its longstanding 1992 Compliance Policy Guide on pharmacy compounding to provide “immediate guidance on what types of compounding might be subject to enforcement action under current law.” In that guidance, FDA took the position, based on the Ninth Circuit’s and Supreme Court’s Western States Medical Center decisions, that “all of section 503A is now invalid.” Accordingly, the agency determined it was necessary to issue guidance outlining the factors the agency would consider in taking enforcement action against a compounding pharmacy for violations of the FDCA. In particular, the agency stated that it would continue to defer to state pharmacy authorities for “less significant” violations of the FDCA but that when a pharmacy’s activities resemble those of a drug manufacturer it would consider enforcement action. The compliance policy guide provided a nonexhaustive list of such activities. It also reflected FDA’s view that, even if a compounding pharmacy has not engaged in these activities, the drugs it compounded would be subject to all of the FDCA’s requirements that apply to manufactured drugs; in the compliance policy guide FDA simply outlined those circumstances under which the agency would actually enforce these requirements against a compounding pharmacy. Four years later, in 2006, a group of 10 pharmacies challenged FDA’s authority to regulate compounded drugs. In that case, FDA asserted that compounded drugs fall within the FDCA’s definition of “new drug” and, therefore, are subject to those provisions of the act that apply to such drugs. The U.S. District Court for the Western District of Texas disagreed with the agency, holding that compounded drugs, when created for an individual patient pursuant to a prescription from a licensed practitioner, “are implicitly exempt” from the FDCA’s new drug definition and the new drug approval process. On appeal, the U.S. Court of Appeals for the Fifth Circuit reversed the district court’s determination and held that compounded drugs are “new drugs” under the FDCA. The court reasoned that Congress would not have enacted FDAMA’s provisions exempting compounded drugs from certain of the FDCA’s “new drug” requirements had these provisions not applied to compounded drugs in the first instance. To reach this conclusion, the Fifth Circuit considered the severability of section 503A’s nonadvertising provisions. Disagreeing with the Ninth Circuit’s Western States reasoning that Congress would not have enacted section 503A without the advertising provisions, the Fifth Circuit found that the FDCA contained a severability provision and that this provision applied to section 503A. Finding no strong evidence that Congress would not have enacted section 503A without the advertising provisions, the court ruled that the law’s nonadvertising provisions were severable from its unconstitutional provisions. The result of the Fifth Circuit’s decision is that—at least in the Fifth Circuit—compounded drugs are, in fact, “new drugs” under the FDCA; however, these drugs are expressly exempt from certain requirements that apply to “new drugs”—namely, compliance with current good manufacturing practice, certain labeling requirements, and new drug approval requirements—if they comply with the nonadvertising conditions set forth in section 503A. 
The Ninth Circuit Court of Appeals’ 2001 Western States decision invalidating all of section 503A and the Fifth Circuit Court of Appeals’ 2008 Medical Center Pharmacy decision holding that all of section 503A other than the advertising, promotion, and solicitation restrictions is valid are directly at odds. As a result of these decisions, section 503A is invalid in those states in the Ninth Circuit (Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon, and Washington) and in full force and effect in those states in the Fifth Circuit (Louisiana, Mississippi, and Texas). FDA officials described the agency’s approach to regulating compounded drugs under this incongruous legal landscape as follows: In the Ninth Circuit, the agency takes the approach that all compounded drugs are “new drugs” under the FDCA, and the agency determines whether to consider taking enforcement action against a compounding pharmacy based on whether the pharmacy engages in any of the activities outlined in the agency’s 2002 compliance policy guide on drug compounding. Even if a compounding pharmacy has not engaged in the activities outlined in the compliance policy guide, the drugs it compounds are, as a legal matter, subject to all of the FDCA requirements that apply to “new drugs”; the compliance policy guide simply outlines those circumstances under which the agency will consider enforcing these requirements against a compounding pharmacy. In the Fifth Circuit, FDA determines whether a compounded drug meets section 503A’s exemption from certain FDCA requirements that would preclude the agency from taking enforcement action against a drug compounder for noncompliance with these requirements. For compounding pharmacies outside of the Fifth and Ninth Circuits, which is the majority of the country, the agency applies the criteria in both section 503A and its 2002 compliance policy guide to determine whether to take enforcement action. Table 2 identifies the criteria that a compounded drug must meet to qualify for the exemption under section 503A of the FDCA from certain of the law’s requirements and the criteria in FDA’s 2002 compliance policy guide, which the agency considers in determining whether to take enforcement action against an entity engaged in drug compounding. In addition to the contact named above, Kim Yamane, Assistant Director; Matthew Byer; Sandra George; Drew Long; and Lisa A. Lusk made key contributions to this report.
Drug compounding is the process by which a pharmacist combines, mixes, or alters ingredients to create a drug tailored to the medical needs of an individual. An outbreak of fungal meningitis in 2012 linked to contaminated compounded drugs has raised concerns about state and federal oversight of drug compounding. GAO was asked to update its 2003 testimony on drug compounding. Specifically, this report addresses (1) the status of FDA's authority to oversee drug compounding, and the gaps, if any, between state and federal authority; (2) how FDA has used its data and authority to oversee drug compounding; and (3) the actions taken or planned by states or national pharmacy organizations to improve oversight of drug compounding. GAO reviewed relevant statutes and guidance; reviewed FDA data; and interviewed officials from FDA, national pharmacy organizations, and four states with varied geography, population, and pharmacy regulations. To help ensure that the entities that compound drugs have appropriate oversight, Congress should consider clarifying FDA’s authority to oversee drug compounding. In addition, FDA should ensure its databases collect reliable and timely data on inspections associated with compounded drugs, and differentiate drug compounders from manufacturers. HHS's comments supported the need to clarify FDA's authority and stated that the information in its inspection database could be improved and that FDA would consider whether it can differentiate compounding pharmacies from manufacturers.
CMS, an agency within HHS, is responsible for much of the federal government’s multi-billion dollar payments for health care, primarily through the Medicare and Medicaid programs. Medicare covers about 40 million individuals 65 years old and older, as well as some disabled individuals. Eligible individuals enroll to receive part A insurance, which helps pay for inpatient hospital, SNF, hospice, and certain home health services. Most Medicare beneficiaries also elect to purchase part B insurance, which helps pay for physician, outpatient hospital, laboratory, and other services. Medicaid is a state-administered health insurance program, jointly funded by the federal and state governments, that covers approximately 40 million eligible low-income individuals, including children and their parents, the aged, blind, and disabled. Each state administers its own program and determines, under broad federal guidelines, eligibility for, coverage of, and reimbursement for specific services and items, such as orthotics and DME. In 2000, about 5.5 million low-income aged and disabled Medicare beneficiaries were also covered by Medicaid. For such beneficiaries, Medicare serves as their primary health care coverage, while Medicaid pays for certain other health care costs. The extent of their Medicaid coverage is primarily dependent on their income. For the lowest income beneficiaries, Medicaid covers long-term care, prescription drugs, and their Medicare part B premiums, deductibles, and copayments, as well as other items and services not available through Medicare. For those dually eligible beneficiaries with somewhat higher incomes, Medicaid support is limited to cost sharing and/or part B premiums. Benefits covered by Medicare are broadly established in statute and further delineated through regulation and other means, such as rulings. Generally, a regulation is a substantive requirement promulgated by a federal agency that has the force and effect of law. Such regulations are generally first proposed, to allow for a period of public notice and comment, before they are finalized. In addition to such substantive regulations, CMS also issues interpretive rules—including administrative rulings—that are decisions of the agency’s administrator that serve as final opinions and statements of policy and interpretation. They provide clarification on, and interpretation of, complex or ambiguous provisions of the law or regulations relating to Medicare, Medicaid, and related matters. CMS characterizes rulings as interpreting previously promulgated policies, rather than establishing new policies. Rulings are final upon issuance without prior public notice or comment period. Medicare pays for orthotic devices and DME under both its part A and part B benefits. Through its post-hospital extended care services benefit under part A, Medicare pays for inpatient skilled nursing care and rehabilitative services furnished by a SNF. To qualify for this benefit, a Medicare beneficiary must be admitted to the SNF within a short period (generally 30 days) after a hospital stay of at least 3 days and receive daily skilled nursing care or rehabilitative services for a condition related to the hospitalization. Medicare’s part A per diem payment generally covers all necessary services and supplies provided by the SNF, such as room, board, and drugs, for as long as the need for daily skilled care continues, up to 100 days of care per benefit period. 
Medicare also covers both orthotics and DME under the part A per diem payment for a beneficiary in a SNF. HCFA considered whether orthotics should be separately reimbursed under part B when the SNF payment method was being developed. In advising the Congress on what to include in the part A per diem payment, the agency took the position that it would be appropriate to include orthotics in the SNF part A per diem payment, because orthotics were frequently used and could be overprovided if separately reimbursed under part B. Medicare also covers orthotic devices and DME under part B in some instances. Orthotic devices are covered under part B for a beneficiary who is not in a part A-covered SNF or hospital stay. In contrast, DME is not covered under part B for a beneficiary in a facility that is primarily engaged in providing skilled nursing or rehabilitative services. These facilities include SNFs certified for Medicare part A payment and other facilities that meet criteria developed by HCFA and used to determine whether a facility is a SNF for DME payment purposes. However, Medicare part B covers both orthotics and DME for a beneficiary living at home or in an institution (other than a Medicare-certified SNF or other facility that meets HCFA’s SNF criteria) that serves as a home. Information summarizing Medicare coverage for orthotics and DME is presented in table 1. Suppliers and practitioners bill Medicare part B for orthotics and DME using Healthcare Common Procedure Coding System (HCPCS) codes. Certain HCPCS codes are designated for orthotic devices, while others are designated for DME. Orthotic HCPCS code listings give a brief description of the device and state whether the device is prefabricated or custom-fabricated. Prefabricated, off-the-shelf devices are manufactured in quantity, such as an adjustable, semi-rigid, knee-joint brace. A prefabricated orthotic may be trimmed, bent, adjusted, or otherwise modified for use by a specific patient. An orthotic device that is custom assembled from prefabricated components is still considered prefabricated. Custom-fabricated devices are individually made for a specific patient, starting with basic materials, such as plastic, metal, leather, or cloth. These would include devices such as an ankle and foot brace that is attached to a shoe to control stability of the ankle and has been custom fabricated based on measurements of the patient’s ankle and foot. Custom-fabricated orthotics include custom-molded devices, which are molded to a model of the patient—such as an ankle and foot brace custom-molded on a casting made from an impression of the patient’s ankle and foot. Orthotics and DME suppliers and providers claim reimbursement for the services and products provided to Medicare beneficiaries under part B from CMS’s four DMERCs. DMERCs are responsible for checking the validity of, and paying, orthotics and DME claims. Medicare part B has different methodologies, specified in law, for determining payment amounts for different categories of DME, but generally uses separate fee schedules for each state, based on historical charges that have been updated in some years to reflect inflation. There are also upper and lower limits on the fees paid for DME. For orthotics, Medicare uses 10 regional fee schedules, which are also based on historical supplier charges and are subject to upper and lower limits. Payments for DME and orthotics are based on the lesser of the fee schedule amount or the submitted charge. 
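The payment rule described above can be illustrated with a short sketch. The following Python fragment is only an illustration of the "lesser of" calculation as characterized in this report; it is not CMS's actual payment logic, and the function name and dollar amounts are hypothetical.

```python
# Illustrative sketch only -- not CMS's actual payment logic. It applies the
# "lesser of" rule described above: the part B allowed amount is the fee
# schedule amount, kept within its upper and lower limits, or the supplier's
# submitted charge, whichever is lower. All dollar figures are hypothetical.

def part_b_allowed_amount(fee_schedule_amount: float,
                          submitted_charge: float,
                          lower_limit: float,
                          upper_limit: float) -> float:
    """Return the allowed amount under the 'lesser of' rule."""
    # Keep the fee schedule amount within its upper and lower limits.
    limited_fee = min(max(fee_schedule_amount, lower_limit), upper_limit)
    # Payment is based on the lesser of the limited fee schedule amount
    # or the supplier's submitted charge.
    return min(limited_fee, submitted_charge)

# Hypothetical example: a $700 submitted charge against a $650 fee schedule
# amount bounded between $500 and $800 yields an allowed amount of $650.
print(part_b_allowed_amount(650.00, 700.00, 500.00, 800.00))
```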
DME and orthotics fee schedules include amounts for newly purchased items, rented items, and for purchase of used devices. The beneficiary is responsible for a 20 percent copayment for DME and orthotics covered under part B. HCFA issued its orthotics ruling in September 1996 to clarify the distinction between certain DME and orthotics for Medicare part B billing purposes. HCFA’s ruling helped address concerns about the manner in which some suppliers were billing Medicare for a system consisting of leg, arm, neck, and back supports that attached to a base. These suppliers were billing for each attached support as a separate orthotic brace. HCFA’s ruling stated that it has been Medicare’s longstanding policy to treat braces attached to DME or other medical or nonmedical equipment as DME. The ruling also said that only braces that could be used independently qualified as orthotics. Attached devices that brace individuals, such as items that attach to wheelchairs, would not be paid under Medicare’s orthotics benefit. Shortly after it was issued, several beneficiaries, a manufacturer, and several suppliers of attached bracing devices challenged the ruling in court, claiming HCFA did not follow appropriate procedures because it should have promulgated this decision as a regulation after public notice and comment. However, a federal appellate court found that HCFA had acted properly in issuing it as a ruling, which is an appropriate way to interpret existing policy. The court also found that the interpretation in the ruling was wholly supportable and that the treatment of seating systems as DME was consistent with congressional intent. In the late 1980s and early 1990s, HCFA and its contractors had become increasingly concerned about how certain suppliers were billing Medicare. Particular concern was raised by the way in which suppliers of an item manufactured by a company called OrthoConcepts were billing Medicare. The OrthoConcepts system consisted of leg, arm, neck, and back supports that attached to a base that could be put on wheels. OrthoConcepts said that its adjustable system of multiple supports provided orthotic support to the body, which would be particularly helpful to individuals with severe neurological problems who needed to be properly positioned. Suppliers of its system were billing each attached support as a separate orthotic brace, using multiple orthotics billing codes that described braces expected to be used independently of other medical equipment. As DMERCs became aware of this billing practice, they began to deny these orthotics claims because the attached bracing devices being provided as a group appeared to be similar in function to a seating system or customized wheelchair, which were both considered DME. However, some of the claims denials were subsequently overturned by an ALJ, who hears Medicare appeals on denied claims. These decisions by an ALJ prompted HCFA to issue its September 1996 ruling, which is binding on these judges. HCFA’s ruling limited payment for orthotics under Medicare part B to leg, arm, back, and neck braces that can be used independently of other equipment. (See app. II for an excerpt from the Conclusions and Illustrations section of the ruling to demonstrate its practical application.) As a result of the ruling, attached bracing devices, such as OrthoConcepts’ items and other attached devices, were placed in the DME benefit category and could no longer be billed as orthotics. 
The ruling cited the Congress’ action in the Omnibus Budget Reconciliation Act of 1990 (OBRA) as evidence for Medicare’s policy on whether attached items could be considered orthotics. OBRA provided that wheelchairs measured, fitted, or adapted for a particular patient, and assembled or ordered with customized features, modifications, or components intended for a specific patient’s use, were considered customized DME. A committee report on the OBRA legislation discussed how wheelchairs could be customized by adding attachments, such as postural control devices and custom-molded cushions, inserts, or lateral supports designed to brace the individual using the wheelchair. HCFA concluded in its ruling that, while the Congress specifically addressed only customized wheelchairs and their accessories in OBRA, it also intended that devices attached to noncustomized wheelchairs be considered part of the wheelchair and, therefore, DME. Concern about whether HCFA’s issuance of its ruling violated statutory requirements was the focus of a court challenge in 1997. The ruling was challenged by OrthoConcepts, whose seating system was affected by the ruling; two Medicare beneficiaries, who used the OrthoConcepts product; and three DME suppliers of the OrthoConcepts product. These parties argued that the ruling was invalid because it was adopted without following the prescribed notice and comment procedures for a substantive rule and that the agency’s refusal to classify the OrthoConcepts seating system as orthotics was arbitrary and capricious. After these parties were initially successful in challenging the ruling in the United States District Court for the District of Massachusetts, HCFA appealed the lower court’s decision. On July 27, 1998, the United States Court of Appeals for the First Circuit found that HCFA’s characterization of the OrthoConcepts seating system as DME was consistent with the agency’s earlier stated position covering such devices and that the agency had merely clarified its policy. Further, the court held that HCFA was not required to provide for public notice and comment before issuing the ruling because it was interpretive rather than legislative or substantive. Because HCFA had followed federal requirements for an interpretive rulemaking, the court also held that the agency had not acted in an arbitrary and capricious manner in issuing the ruling. Furthermore, the court found that the interpretation in the ruling was wholly supportable and that the ruling’s treatment of seating systems as DME was consistent with congressional intent. The Supreme Court denied a request to hear a further appeal. As a result of HCFA’s ruling, attached bracing devices are now clearly classified as DME and cannot be billed as orthotics, which affects beneficiaries who live in nursing homes. Part B no longer pays claims for attached bracing devices for beneficiaries in institutions primarily engaged in providing skilled nursing care because part B does not cover DME in these settings. HCFA and the DMERCs developed criteria and guidance on how to define such institutions that prohibit payment for DME for beneficiaries in most nursing homes—not just Medicare-certified SNFs. These beneficiaries would need to purchase such devices with their own resources or through other payers. When Medicare was established in 1965, facilities providing skilled nursing care under part A were expected to serve as a bridge between the hospital and other, less skilled care or home. 
Medicare part B did not cover medical and other health services—such as DME—provided in what were then called extended care facilities and are now called SNFs. Medicare part B did pay for DME in facilities that provided a lesser level of care, but as the nursing home industry evolved, fewer facilities provided only this lesser level of care. In 2001, most nursing homes were certified as SNFs. A significant number of Medicare beneficiaries reside more or less permanently in SNFs or other nursing homes that DMERCs consider to meet HCFA’s criteria for a SNF for DME payment purposes. Such beneficiaries are therefore unable to obtain Medicare coverage for DME, while other beneficiaries living in congregate settings such as assisted living facilities, as well as those living at home, do receive DME coverage. Following the ruling, claims were no longer paid for attached bracing devices for beneficiaries living in nursing homes, which caused a drop in the number and amount of such claims paid by Medicare. Medicare expenditures for such devices declined by at least $1.4 million between 1996 and 1997, and expenditures remained lower in subsequent years. Prior to the ruling, the HCPCS coding system had nine codes that described bracing devices that attached to wheelchairs. Suppliers used these codes to bill for such items under Medicare’s orthotics benefit category and DMERCs paid such claims. These devices included one back support to position wheelchair users and eight mobile arm supports to assist them in moving their hands and arms. (See table 2 for information on these nine devices.) These codes were unlike other orthotics codes because most of the other HCPCS orthotics codes were for braces designed to be used independently of other equipment. In addition, most other items that attached to wheelchairs—such as special headrests to provide postural support—had codes that categorized them as DME and were paid under the DME benefit category. To develop a conservative assessment of the effect of the ruling on claims payment, we analyzed Medicare claims data for the nine attached bracing devices that were classified in the DME—rather than the orthotic—benefit category as a result of the clarification in the ruling. Our analysis showed that Medicare part B expenditures for the nine attached bracing devices provided to beneficiaries in nursing homes dropped by about $1.4 million between 1996 and 1997, and the number of claims paid for these beneficiaries for such devices declined from about 3,200 claims in 1996 to only 11 claims in 1997. Furthermore, the reduction has continued, with no claims paid for the nine attached bracing devices for beneficiaries in nursing homes in either 1999 or 2000. (See fig. 1.) However, our estimate of the change in Medicare spending for attached bracing devices for nursing home residents prior to and after the ruling is conservative because payment under the nine codes we analyzed does not represent all payments for such devices. Some suppliers—such as those providing OrthoConcepts’ products—were billing for attached bracing devices using codes for nonattached braces. Because both attached and nonattached items were being billed using these codes, we could not isolate the claims for attached items from claims for nonattached items. As a result, we could not analyze all billing in the orthotics benefit category for attached bracing devices prior to the ruling. 
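To make the year-to-year comparison above concrete, the sketch below shows the kind of aggregation such a claims analysis involves. It is only an illustration: the code identifiers, claim counts, and dollar amounts are hypothetical placeholders, not the actual nine HCPCS codes or Medicare claims data.

```python
# Minimal sketch of a year-over-year claims comparison, keyed by
# (HCPCS code, year). The placeholder codes and amounts below are
# hypothetical, not the actual codes or payments analyzed in this report.
from collections import defaultdict

claims = {
    ("CODE_1", 1996): (1700, 750_000.00),   # (claim count, paid amount)
    ("CODE_2", 1996): (1500, 650_000.00),
    ("CODE_1", 1997): (7, 3_000.00),
    ("CODE_2", 1997): (4, 1_800.00),
}

totals = defaultdict(lambda: [0, 0.0])  # year -> [claim count, paid amount]
for (_code, year), (count, paid) in claims.items():
    totals[year][0] += count
    totals[year][1] += paid

for year in sorted(totals):
    count, paid = totals[year]
    print(f"{year}: {count:,} claims, ${paid:,.0f} paid")
```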
The effect of the ruling was to make beneficiary place of residence pivotal as to whether Medicare would reimburse for attached bracing devices under part B. HCFA’s ruling did not affect Medicare beneficiaries living in their own homes, or settings such as assisted living facilities, because attached bracing devices that are considered DME are covered for beneficiaries in those settings. The ruling affected beneficiaries who are long-term residents of SNFs and other institutions primarily engaged in providing skilled nursing care because DME is not covered by part B for beneficiaries in these facilities. If the beneficiary is in a part A-covered stay, both orthotics and DME are included in the per diem part A payment. However, when a beneficiary is not in a Medicare part A-covered stay, part B will cover orthotics, but not DME, including customized DME items that are uniquely constructed or substantially modified for a specific beneficiary. Some beneficiaries who reside in SNFs and other institutions primarily engaged in providing skilled nursing care and need attached bracing devices that are not paid for through Medicare can obtain them through other sources. For example, certain state Medicaid programs separately cover attached bracing and similar devices as customized DME for nursing home residents, and other Medicaid programs may include payment for these devices in their per diem rates. However, other beneficiaries may have to pay out of pocket or forgo using such devices. The policy of not covering DME for beneficiaries in facilities primarily engaged in providing skilled nursing care has its roots in the early years of the Medicare program. When the Congress created the Medicare program in 1965, part A was designed to cover only hospitalizations and relatively short-term, post-hospital care in the home or in a facility that provided skilled nursing care. Part A post-hospital care in such a facility was expected to involve skilled nursing or rehabilitative care, which would serve as a bridge between the hospital and other, less intense nursing care or therapy. In this skilled nursing home environment, Medicare did not pay for any service, drug, or other items under part A—including DME and orthotics—that could not be paid for if furnished in a hospital. Payment under part A for a beneficiary’s SNF stay would cover only such needs as would be covered for a beneficiary’s hospital stay. When the Medicare program began, facilities providing skilled nursing care were not expected to serve as patients’ residences past the immediate recovery from their hospitalization. Medicaid’s coverage of nursing home care is broader than Medicare’s, because Medicaid also covers institutional care for beneficiaries who do not need skilled nursing care. In 1971, the Congress expressly designated intermediate care facilities (ICF) as a service states could cover under Medicaid. ICFs were defined as providing regular health-related care and services to individuals who needed institutional care and services above the level of room and board, but not the level of care a hospital or a SNF would provide. State Medicaid policies, rather than the statute’s distinction in the types of care provided, determined whether nursing homes were designated as SNFs or ICFs. In some states, almost all nursing homes were designated as SNFs, although many of these SNFs served longer term residents who would be receiving care similar to that provided by ICFs in other states. 
Under the original 1965 Medicare statute, part B did not pay for medical and health services provided by hospitals, extended care facilities (now known as SNFs), or home health agencies. As a result, DME and other ancillary services—such as physical therapy—were not paid for under part B in a SNF. In 1967, the law was changed to eliminate the prohibition on part B payment for certain ancillary services provided in a SNF. In a report accompanying the 1967 legislation, the Senate Finance Committee noted that retaining a sweeping prohibition against paying for any services under part B in a SNF would deprive a beneficiary who had exhausted, or never qualified for, part A benefits of any payment for services that—in another setting—would be separately coverable under part B. However, the Congress added language that retained the prohibition on paying for DME under part B in a SNF, at the same time that it allowed part B payment in a SNF for other ancillary services. HCFA and its carriers had to delineate when a facility was primarily engaged in providing skilled nursing care, particularly for facilities that were not Medicare- or Medicaid-certified SNFs, such as ICFs. In 1982 and 1984, HCFA published rulings with criteria to determine under what circumstances a facility would be classified as primarily engaged in providing skilled nursing care. A facility has to meet five criteria to be considered primarily engaged in providing skilled nursing care:
• Nursing services are provided under the direction or supervision of one or more registered, licensed practical, or vocational nurses.
• Nursing personnel, including nursing aides or orderlies, are on duty on a 24-hour basis.
• On average, the ratio of full-time equivalent nursing personnel to the number of beds (or average patient census) is no less than 1 to 15 per shift.
• Bed and board are provided to inpatients in connection with the furnishing of nursing care, plus one or more medically related health services, such as physicians’ services; physical, occupational, or speech therapy; diagnostic and laboratory services; and administration of medication.
• The facility is not licensed or certified solely as an ICF.
These criteria provided a means for identifying facilities that may not meet all of the requirements for SNFs but could be classified as primarily engaged in providing skilled nursing care for the purposes of prohibiting part B DME coverage. In a 1985 court case, HCFA indicated that about 90 percent of the 11,000 ICFs were classified as primarily providing skilled nursing care, leaving about 10 percent of ICFs as facilities in which beneficiaries could have part B coverage for their DME. The ICF as a category of nursing home distinct from a SNF under Medicaid disappeared when the Omnibus Budget Reconciliation Act of 1987 combined them into a single category, nursing facility (NF). A single set of requirements was developed for all nursing homes participating in Medicare and Medicaid. With the single set of participation requirements and more generous Medicare coverage of stays, many more nursing homes became wholly or partially certified as Medicare SNFs to be eligible for part A payment. Most of their residents would, however, still need longer-term, less skilled services that would not qualify for part A coverage. In 2001, most nursing home residents were in SNFs, including Medicare beneficiaries who were long-term residents. 
Although they are in SNFs, these Medicare beneficiaries may not be receiving a level of care that would qualify them for the Medicare part A-covered SNF benefit or otherwise might not be eligible for this coverage, which is only post-hospital and for a maximum of 100 days. Such beneficiaries who are paying for their care out of their own pockets or through other payers are not eligible for part B DME benefits that they could receive if living at home or in an assisted living facility. This prohibition extends even to items that need to be customized for them, such as customized wheelchairs. Beneficiaries in NFs are also included in the group for which DME is not payable under part B. The four DMERCs have issued guidance to suppliers indicating that they will not pay for DME under part B in any nursing home. For example, the region B DMERC supplier manual, dated June 2000, states, “DME and related supplies and accessories are not covered by Medicare part B and claims must not be submitted to the DMERC for patients in a SNF or NF, regardless of whether the patient is in a Medicare covered stay or not. This is true even if the nursing facility could be considered the patient’s permanent residence.” CMS officials noted that DMERCs do not pay for DME in nursing homes because DMERCs presume that these facilities meet the criteria for being primarily engaged in providing skilled nursing care for DME part B payment purposes and, therefore, cannot be considered a beneficiary’s home. If the ruling were rescinded by CMS and attached bracing devices were paid as orthotics, annual spending under Medicare part B for such devices for beneficiaries in nursing homes would increase modestly if utilization returned to the pre-ruling level. However, several factors suggest that utilization could increase more with the ruling’s rescission. The effect on Medicaid expenditures is less certain. Because state Medicaid coverage policies are not uniform, rescinding the ruling would have a varying effect on states’ Medicaid expenditures. It is difficult to predict with confidence how much Medicare payments might increase if the ruling were rescinded. For example, if the utilization level returned to the pre-ruling level, spending increases would be modest. Rescinding the ruling would move the nine HCPCS codes for attached bracing devices back into the orthotics benefit category. If the change were limited to billing under those nine codes and we assumed no growth in future billing, claims volume might only return to the pre-ruling level. This would be an increase of about 3,000 claims and a payment increase of about $1.8 million per year—given the amounts Medicare currently pays for these items, which generally now cost between $500 and $800. However, as discussed above, this estimate is based on a claims analysis that does not include all the billing for attached devices that occurred before the ruling. Because some suppliers billed attached bracing devices using codes that were not specific for such devices, all of the claims paid prior to the ruling for attached bracing devices cannot be identified with certainty. Moreover, several factors could lead to considerable growth in the use of such devices, which would increase Medicare costs more substantially than our conservative estimate. First, the number of Medicare beneficiaries is likely to grow significantly over time, with the number over age 85 growing fastest, which would likely increase demand for bracing devices in nursing homes. 
In addition, estimates of the number of beneficiaries who might use attached bracing devices are higher than the prior utilization levels for the devices we identified. Our analysis of data maintained by CMS on characteristics of nursing home residents identified about 53,000 nursing home residents from July 1999 through June 2000 who at that time were 65 years and older, were likely eligible for Medicare part B, and were wheelchair-bound with disabling medical conditions, pressure ulcers, and functional limitations. Others have also developed estimates on the number of elderly nursing home residents with characteristics that indicate that they could potentially use attached bracing devices. These estimates vary considerably—ranging from 35,000 individuals by OrthoConcepts to almost 170,000 individuals by researchers at the University of Pittsburgh. HCFA developed an estimate of as many as 80,000 individuals who might potentially use these attached bracing devices. Second, should the ruling be rescinded, Medicare part B would pay for attached bracing devices for nursing home residents, providing financial incentives that could lead to increased utilization. For example, suppliers who could profitably furnish attached bracing and related devices to beneficiaries in nursing homes would have a financial incentive to supply that market. Manufacturers would have incentives to develop new products that fit within the orthotics definition—such as chairs that provide “orthotic” support—if such items could be paid for under part B. Many items that support and position wheelchair-bound individuals could be described as having an orthotic benefit, including the chair itself. Furthermore, some nursing homes might shift a portion of the costs of their beneficiary services to Medicare. For example, to increase their revenues, nursing homes could substitute orthotics devices that could be paid separately under part B for items of DME that are not separately paid under part B. Finally, if the ruling were rescinded, the distinction between DME and orthotic devices would be blurred, making it more confusing for providers who are trying to bill appropriately and more difficult for DMERCs to identify and deny claims that were inappropriately billed. In addition to increasing Medicare expenditures, rescinding HCFA’s ruling would also affect state Medicaid expenditures for beneficiaries who are dually eligible for Medicare and Medicaid. These effects also cannot be quantified with certainty. The impact on a particular state’s spending would depend on its current coverage policies for customized DME, increases in the use of such items, and changes in state reimbursement policies. For example, states paying separately for customized DME—for example, Michigan, Ohio, and Washington—would likely see their expenditures decrease. Since Medicare would become the primary payer for such items, these states would be responsible only for the copayments and deductibles for these beneficiaries. However, increases in the use of such devices could significantly affect potential Medicaid cost savings. Other states—such as Florida—do not separately cover customized DME. If the ruling were rescinded, these states would become responsible for copayments and deductibles for Medicaid-eligible beneficiaries, which could cause states’ payments to increase. However, these states may offset potential cost increases if they reduced their Medicaid per diem rates. 
Such reductions could be justified because these states would now be required to separately cover a portion of the cost of items that had been previously covered in their nursing homes’ per diem rate. The rescission of HCFA’s ruling on orthotics would raise program integrity concerns. If HCFA’s ruling on orthotics were rescinded by CMS, the requirement in BIPA aimed at increasing program integrity by restricting payment for custom-molded orthotics to qualified providers would not apply to the attached bracing devices we identified as being affected by the ruling. Even if some attached bracing devices were affected by the new BIPA requirement after the ruling’s rescission, this requirement may have limited potential for curbing inappropriate orthotic payments because most Medicare payments are for orthotics not covered by the requirement and, if industry trends continue, proportionally fewer devices may be covered by the requirement in the future. In addition, the ruling’s rescission could lead to inappropriate billing because suppliers would have more difficulty determining if items should be billed as orthotics or DME, given that the distinction between some items in these two benefit categories would be less clear. Furthermore, Medicare beneficiaries in nursing homes have been the target of fraudulent or abusive billing in the past for orthotics, DME, and other services. Therefore, should the ruling be rescinded, additional controls would be needed. The BIPA requirement was developed because the HHS OIG had reported on problems related to Medicare orthotics in recent years, including inappropriate billing practices associated with these devices. For example, the OIG found that, compared to certified suppliers, noncertified suppliers are more likely to inappropriately provide or bill for orthotics. The OIG recommended that HCFA require that only qualified practitioners provide beneficiaries with certain kinds of orthotic devices. BIPA modified the Medicare requirements related to customized items to stipulate that Medicare will pay for custom-molded orthotics only if furnished by a qualified practitioner and fabricated by a qualified practitioner or supplier. The statutory definition of qualified practitioner includes a physician; an orthotist who is licensed, certified, or has credentials and qualifications approved by the HHS Secretary; or a qualified physical therapist or occupational therapist. The language added by BIPA describes a custom-fabricated orthotic as an item that (1) requires education, training, and experience to fabricate, (2) is included in a list of items to be developed by CMS, and (3) is individually fabricated over a positive model of the patient. CMS will be working with experts in the field of orthotics, using a negotiated rulemaking process, to develop the list of custom-fabricated orthotic items subject to the new requirement. Professionals in the field of customized seating and orthotics told us that they believe the new BIPA requirement relating to qualified providers will help address some problems related to inappropriate billing. They also said that the requirement will improve the quality of care provided to beneficiaries by ensuring that providers have the knowledge and skills needed to craft and fit custom-molded orthotic devices. However, the BIPA requirement regarding qualified practitioners and suppliers may have limited potential for curbing inappropriate orthotic payments in the program as a whole for several reasons. 
Medicare expenditures for custom-molded orthotics amounted to less than 30 percent of Medicare spending for orthotics in 2000. Furthermore, the requirement may apply to an even smaller percentage of covered orthotic devices in the future, because, due to technological advances, more prefabricated devices that can serve functions similar to customized components with little or no alteration are entering the market. Therefore, if this trend continues, proportionately fewer devices will be covered by the new BIPA requirement because the payment restriction is limited to custom-molded orthotics. Finally, limiting payment to qualified practitioners and suppliers does not, in itself, completely resolve questionable billing practices because some of these providers have also billed Medicare inappropriately. For example, in 1997, the HHS OIG reported that certified orthotists billed improperly for items that were not medically necessary or not provided as billed, but to a lesser degree than other suppliers. In 1999, the OIG also reported on instances of improper billing for therapy by physical and occupational therapists working in SNFs—professionals who can be considered qualified practitioners and may supply custom-molded orthotics under the BIPA requirement. If the ruling were rescinded, the new requirement in BIPA that Medicare pay only qualified practitioners and suppliers for custom-molded orthotics would not apply to the attached bracing devices that we identified as affected by the ruling. BIPA’s requirement applies only to custom-molded orthotic devices, not all custom-fabricated ones. The devices we identified as being affected by the ruling are not custom-molded because they are not made over a positive model of the patient’s body part. If HCFA’s ruling on orthotics were to be rescinded, a heightened level of oversight of orthotics billing would be critical to safeguard program dollars. Concerns about improper billing prompted HCFA to issue its orthotics ruling to clarify the distinction between DME and orthotics for Medicare part B billing purposes in the first place. Rescinding the ruling would once again blur the distinction between DME and orthotics, increasing the potential for inappropriate billing—both intentional and unintentional. A heightened level of oversight would also be critical, because the OIG and we have reported that Medicare beneficiaries in nursing homes can be an attractive target for fraudulent or abusive billing for orthotics, DME, and other services. Because nursing homes are institutions with a large number of co-located beneficiaries, providing services to multiple individuals in this setting can help maximize profits for providers and suppliers. Although most providers and suppliers are honest and bill appropriately, some, including certain durable medical equipment and orthotics suppliers, have been involved in fraudulent or abusive billing of Medicare for services and supplies furnished to nursing home residents. Other controls could enhance safeguards associated with Medicare reimbursement for orthotics, should the ruling be rescinded. In the past, Medicare expenditures have increased more than anticipated after a coverage policy change, due, in part, to inappropriate billing. Without adequate monitoring of orthotics payments, rescinding the ruling could have a similar outcome. DME claims are currently monitored so that DMERCs can follow payment trends over time for groups of codes for similar types of items (such as leg braces). 
If the ruling were rescinded, DMERCs might have to extend their monitoring in order to analyze payment trends for attached devices. Through monitoring claims billing, DMERCs would be more likely to spot any questionable trends. If such trends were identified, DMERCs could examine a sample of questionable claims and their related medical records and take other steps as needed to determine if the items were medically necessary and provided as billed. A prior authorization process, such as those used by some state Medicaid programs for higher priced or other selected orthotic or DME items, may also provide better control, should the ruling be rescinded. These Medicaid programs review medical justifications and a description of the orthotic or customized DME item before it is provided to the beneficiary. If the item is justified, Medicaid notifies the supplier in advance that it will pay for the item and the amount it will pay. The Medicaid prior authorization process helps ensure program integrity because it establishes that the device is medically necessary. Some providers and suppliers also noted that prior authorization protects them from the risk of supplying devices without knowing whether and what they will be paid. However, the use of the prior authorization process by the Medicaid program involves an investment of time and resources for prior review of supporting documentation. For Medicare, DMERCs do not use all the elements of a prior authorization process. However, they have begun to use a process for determining coverage—but not payment—in advance for a few items of DME. As of October 1, 2001, as part of ongoing program integrity efforts, DMERCs will accept requests from beneficiaries and suppliers for an advance determination of Medicare coverage for customized DME, which is an item that has been uniquely constructed or substantially modified for a specific beneficiary. This process differs from the prior authorization used by Medicaid programs in the states whose processes we reviewed because an advance determination of Medicare coverage does not guarantee a specific amount that Medicare will pay for an item. As a result, suppliers will be uncertain about how much reimbursement to expect for customized wheelchairs and accessories that they supply to beneficiaries. Practitioners reported that such uncertainty affects suppliers’ willingness to provide customized items to beneficiaries. HCFA’s 1996 ruling on orthotics more clearly delineated the circumstances under which Medicare would consider an item as an orthotic or DME for payment policy, and HCFA’s issuance of the ruling was found to be proper in court. The ruling affected relatively few devices and only a small percentage of overall Medicare program expenditures. Without the ruling, there would be some confusion for suppliers about whether bracing devices that are attached to wheelchairs should be billed as DME or orthotics and for DMERCs about whether particular claims should be paid. Revising Medicare payment policy to treat attached bracing devices as orthotics would likely increase program expenditures, although to what degree is uncertain. We would caution that taking such a step without addressing program integrity concerns could lead to an increase in inappropriate payments by Medicare and Medicaid. We provided a draft of this report to CMS for its review and comment. (See app. III for CMS’s comments.) CMS generally agreed with our conclusions. 
In its comments, CMS observed that, in addition to holding that the orthotics ruling had been properly issued, the U.S. Court of Appeals decision in Warder v. Shalala had also found that the content of the ruling was wholly supportable and that the ruling well effectuated congressional intent by classifying seating systems as DME. We agreed and added language to that effect to our final report. CMS also suggested that our report clearly indicate the precedent-setting effect that rescinding the ruling could have on the provision of certain types of equipment as DME in SNFs. For example, CMS said that if the ruling were rescinded, other components of a wheelchair could be construed to be an orthotic, such as the backrest of a wheelchair. In our report, we discuss and provide examples of the potential impact of rescinding the ruling, stating that there would be financial incentives that could lead to increased utilization if Medicare part B paid for attached bracing devices for nursing home residents. We also note that, if the ruling were rescinded, the distinction between DME and orthotic devices would be blurred, making it more confusing for providers who are trying to bill appropriately and more difficult for DMERCs to identify and deny claims that were inappropriately billed. In general, we agree with CMS’s comments, but we did not change the report because we believe that we had adequately addressed the concerns. CMS also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Administrator of the Centers for Medicare and Medicaid Services, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (312) 220-7600 or Sheila K. Avruch at (202) 512-7277. Other key contributors to this report were Barrett Bader, Sandra Gove, and Craig Winslow. To determine why the Health Care Financing Administration (HCFA) issued its orthotics ruling and if the agency followed required procedures in issuing it, we conducted interviews with officials and representatives from the agency, two Durable Medical Equipment Regional Carriers (DMERC), and reviewed the ruling and agency documents related to its development and issuance. We also interviewed a plaintiff and legal representatives involved in the legal challenge to the ruling and reviewed relevant documents, including the federal district and appellate courts’ decisions on whether HCFA had appropriately followed the proper statutory procedures in issuing the ruling. To assess the impact of the ruling on Medicare beneficiaries, we reviewed Medicare payments and coverage policies for orthotics and durable medical equipment (DME). We analyzed Medicare claims data from the Medicare part B extract and summary system for the Healthcare Common Procedure Coding System (HCPCS) codes associated with the nine attached bracing devices moved from the orthotic to the DME benefit category as a result of the ruling. We also discussed the impact of the ruling on beneficiaries living in nursing homes with Centers for Medicare and Medicaid Services (CMS) officials, and state Medicaid officials in Florida, Indiana, Michigan, Ohio, Pennsylvania, and Washington. 
We judgmentally chose these states to attain geographic diversity and because these states have a large proportion of elderly Medicare beneficiaries. We also discussed the impact of the ruling with four providers and suppliers of attached bracing and other customized seating accessories, in addition to national organizations representing them, seven clinicians with experience in the seating and positioning needs of elderly and disabled individuals, and two manufacturers of attached bracing and similar devices. We chose the clinicians, providers, suppliers, and manufacturers to interview from among those recommended for their expertise by the national organizations. To assess the financial impact of rescinding the ruling, we reviewed Medicare and Medicaid coverage and payment policies and then interviewed representatives from CMS and Medicaid programs in Florida, Indiana, Michigan, Ohio, Pennsylvania, and Washington. We also developed an estimate of the number of beneficiaries who could use these devices by analyzing national data on nursing home residents from the minimum data set (MDS), and we reviewed demographic findings from other studies. Our MDS analysis used data from July 1999 through June 2000 and was limited to Medicare beneficiaries with all of the following characteristics: (1) functional limitations that required the use of wheelchairs as their primary means of locomotion, (2) one or more of eight neurological conditions that experts told us could indicate a need for attached bracing devices because individuals with such conditions can have poor motor control and may not be able to readily brace or reposition themselves in their wheelchairs, (3) pressure ulcers ranging from mild to severe, and (4) limited ability to move while in bed or get out of bed without requiring extensive assistance from either one or two other people. To evaluate the implications for Medicare program integrity if the ruling were rescinded, we interviewed officials from the Department of Health and Human Services Office of the Inspector General (HHS-OIG) and reviewed pertinent OIG reports. In order to assess the scope of the requirement and its possible effect on attached bracing devices, we analyzed claims data from the statistical analysis durable medical equipment regional carrier for custom-fabricated orthotics, as defined by the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000. We also interviewed providers and suppliers and organizations representing them and reviewed documents that they provided to us to further assess the effect of the requirement on these devices. We performed our work from January 2001 through March 2002 in accordance with generally accepted government auditing standards. The following discussion is excerpted from the Conclusions and Illustrations section of HCFA’s ruling to demonstrate its application. “A supplier manufactures and supplies medical devices to individuals who are generally elderly and suffer from Alzheimer’s or other debilitating neuromuscular diseases that have caused them to be non-ambulatory, immobile, and confined to a chair or bed. Due to their immobility, these patients may suffer from secondary complications, such as pressure sores, multi-sited contractures, musculoskeletal degeneration and deformities, and circulatory problems. Under a physician’s order, the supplier furnishes individually fitted attachments designed to be used in conjunction with a chair to seat and position the patient. 
The attachments, which the supplier labels "orthotic braces," are alleged to position limbs and other body parts properly; restrict motion or weight bearing; immobilize and protect weak musculoskeletal segments; reduce load; retard progression of musculoskeletal deformity; and improve function. The design of the supplier's "orthotic braces" requires them to be attached to the chair frame, and the "orthotic braces" cannot function or be used apart from the chair to which they are attached. Discussion: Although the devices in question may support or restrict movement in parts of the body, they are not braces within the meaning of [the statute] because they are integral parts of a seating system and are not designed or intended to be used apart from the seating system."
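The MDS analysis described in the scope and methodology above is, in effect, a set of record-selection rules applied to resident-level data. The following is a minimal sketch of how the four screening criteria could be applied; the field names (wheelchair_primary, neuro_conditions, pressure_ulcer_stage, bed_mobility) are hypothetical placeholders and do not reflect the actual MDS item labels or the analysis code used for this report.

import pandas as pd

def screen_mds_records(mds: pd.DataFrame) -> pd.DataFrame:
    """Keep residents meeting all four screening criteria described above."""
    return mds[
        (mds["wheelchair_primary"])                    # (1) wheelchair is primary means of locomotion
        & (mds["neuro_conditions"] >= 1)               # (2) at least one of the eight neurological conditions
        & (mds["pressure_ulcer_stage"] >= 1)           # (3) pressure ulcer, mild to severe
        & (mds["bed_mobility"].isin(["extensive_1", "extensive_2"]))  # (4) extensive assistance from one or two people
    ]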
In the late 1980s and early 1990s, the Health Care Financing Administration (HCFA), now called the Centers for Medicare and Medicaid Services (CMS), became concerned that some suppliers were improperly billing Medicare for items that attach to wheelchairs and other equipment. Some suppliers were billing for such items using codes for orthotic devices, including arm, back, and neck braces that provide support for or immobilize weak or injured limbs, while others were billing using codes for durable medical equipment, which includes equipment such as wheelchairs and crutches that can withstand repeated use and is appropriate for home use. Whether an item is billed as an orthotic or DME device can affect whether such claims are paid. To clarify Medicare's payment policy on orthotics, HCFA issued a ruling stating that Medicare considered such items to be durable medical equipment rather than orthotics. HCFA issued Ruling 96-1 to clarify the circumstances under which certain items would be classified as orthotics or as DME for Medicare part B payment purposes. A federal appellate court found that HCFA had followed appropriate procedures to issue the rule as an interpretation of Medicare policy, that the interpretation in the ruling was wholly supportable, and that treating seating systems as DME was consistent with congressional intent. HCFA's ruling that attached bracing devices were in the DME benefits category and could no longer be billed as orthotics affects beneficiaries residing in Medicare-certified skilled nursing facilities (SNF) and other institutions primarily engaged in providing skilled nursing care. Because Medicare part B does not cover DME in SNFs and other institutions primarily engaged in providing skilled nursing care, claims for such items are no longer paid for residents in nursing homes. This ruling affects residents of all nursing homes, not just SNFs. If HCFA's ruling were rescinded and Medicare's policy changed so that attached bracing devices were classified as orthotics, how much Medicare and Medicaid would spend for orthotics is uncertain. The increase in Medicare spending would depend on how extensively attached bracing devices would be provided to nursing home residents following the ruling's rescission. The distinction between DME and orthotics would become less clear, which could lead to inappropriate billing. Therefore, if the ruling were rescinded, additional controls, such as closely monitoring billing and reviewing medical justification for customized items prior to payment, would be vital to help curb potentially inappropriate billing.
Detecting illicit trafficking in nuclear material is complicated because one of the materials of greatest concern—highly enriched uranium—has a relatively low level of radioactivity and is, therefore, among the most difficult to detect. In contrast, medical and industrial radioactive sources, which could be used to construct a dirty bomb, are highly radioactive and, therefore, easier to detect. Although their levels of radioactivity differ, uranium and radioactive sources are similar in that they generally emit only gamma radiation, which is relatively easily shielded when encased in high-density material, such as lead. For example, we reported in March 2005 that a cargo container containing a radioactive source passed through radiation detection equipment DOE had installed at a foreign seaport without being detected because the source was surrounded by large amounts of scrap metal in the container. Plutonium, another nuclear material of great concern, emits both gamma and neutron radiation. Although most currently fielded radiation detection equipment has the capability to detect both gamma and neutron radiation, shielding neutron radiation can be more difficult than shielding gamma radiation. Consequently, plutonium can usually be detected by a neutron detector regardless of the amount of shielding from high-density material. According to DOE officials, neutron radiation alarms are caused only by man-made materials, such as plutonium, while gamma radiation alarms are caused by a variety of naturally occurring sources, including commercial goods such as bananas, ceramic tiles, and fertilizer, as well as by dangerous nuclear materials, such as uranium and plutonium. Because of the complexities of detecting and identifying nuclear material, customs officers and border guards who are responsible for operating detection equipment must be trained in using handheld radiation detectors to pinpoint the source of an alarm, identify false alarms, and properly respond to cases of nuclear smuggling. The manner in which radiation detection equipment is deployed, operated, and maintained can also limit its effectiveness. Given the difficulties in detecting certain nuclear materials and the inherent limitations of currently deployed radiation detection equipment, it is important that the equipment be installed, operated, and maintained in a way that optimizes authorities’ ability to interdict illicit nuclear materials. Although efforts to combat nuclear smuggling through the installation of radiation detection equipment are important, the United States should not and does not rely upon radiation detection equipment at U.S. or foreign borders as its sole means for preventing nuclear materials or a nuclear warhead from reaching the United States. Recognizing the need for a broad approach to the problem, the U.S. government has multiple initiatives that are designed to complement each other that provide a layered defense against nuclear terrorism. For example, DOE works to secure nuclear material and warheads at their sources through programs that improve the physical security at nuclear facilities in the former Soviet Union and in other countries. In addition, DHS has other initiatives to identify containers at foreign seaports that are considered high risk for containing smuggled goods, such as nuclear and other dangerous materials. 
Supporting all of these programs is intelligence information that can give advance notice of nuclear material smuggling and is a critical component in preventing dangerous materials from entering the United States. One of the main U.S. efforts providing radiation detection equipment to foreign governments is DOE's Second Line of Defense program, which began installing equipment at key sites in Russia in 1998. According to DOE, through the end of fiscal year 2005, the program had spent about $130 million to complete installations at 83 sites, mostly in Russia. Ultimately, DOE plans to install radiation detection equipment at a total of about 350 sites in 31 countries by 2012 at a total cost of about $570 million. In addition to DOE's efforts, other U.S. agencies also have programs that provide radiation detection equipment and training to foreign governments. Two programs at DOD—the International Counterproliferation Program and Weapons of Mass Destruction Proliferation Prevention Initiative—have provided equipment and related training to eight countries in the former Soviet Union and Eastern Europe at a cost of about $22 million. Similarly, three programs at State—the Nonproliferation and Disarmament Fund, Georgia Border Security and Law Enforcement program, and Export Control and Related Border Security program—have spent about $25 million to provide radiation detection equipment and training to 31 countries. However, these agencies face a number of challenges that could compromise their programs' effectiveness, including (1) corruption of foreign border security officials, (2) technical limitations of equipment at some foreign sites, (3) problems with maintenance of handheld equipment, and (4) the lack of infrastructure and harsh environmental conditions at some border sites. First, according to officials from several recipient countries we visited, corruption is a pervasive problem within the ranks of border security organizations. DOE, DOD, and State officials told us they are concerned that corrupt foreign border security personnel could compromise the effectiveness of U.S.-funded radiation detection equipment by either turning off equipment or ignoring alarms. To mitigate this threat, DOE and DOD plan to deploy communications links between individual border sites and national command centers so that alarm data can be simultaneously evaluated by multiple officials, thus establishing redundant layers of accountability for alarm response. In addition, DOD plans to implement a program in Uzbekistan to combat some of the underlying issues that can lead to corruption through periodic screening of border security personnel. Second, some radiation portal monitors that State and other U.S. agencies previously installed have technical limitations: they can detect only gamma radiation, making them less effective at detecting some nuclear material than equipment with both gamma and neutron radiation detection capabilities. Through an interagency agreement, DOE assumed responsibility for ensuring the long-term sustainability and continued operation of radiation portal monitors and X-ray vans equipped with radiation detectors that State and other U.S. agencies provided to 23 countries. Through this agreement, DOE provides spare parts, preventive maintenance, and repairs for the equipment through regularly scheduled maintenance visits. Since 2002, DOE has maintained this equipment but has not upgraded any of it, except at one site in Azerbaijan. 
According to DOE officials, new implementing agreements with the appropriate ministries or agencies within the governments of each of the countries where the old equipment is located are needed before DOE can install more sophisticated equipment. Third, since 2002, DOE has been responsible for maintaining certain radiation detection equipment previously deployed by State and other agencies in 23 countries. However, DOE is not responsible for maintaining handheld radiation detection equipment provided by these agencies. As a result, many pieces of handheld equipment, which are vital for border officials to conduct secondary inspections of vehicles or pedestrians, may not function properly. For example, in Georgia, we observed border guards performing secondary inspections with a handheld radiation detector that had not been calibrated (adjusted to conform with measurement standards) since 1997. According to the detector’s manufacturer, yearly recalibration is necessary to ensure that the detector functions properly. Finally, many border sites are located in remote areas that often do not have access to reliable supplies of electricity, fiber optic lines, and other infrastructure essential to operate radiation detection equipment and associated communication systems. Additionally, environmental conditions at some sites, such as extreme heat, can affect the performance of equipment. To mitigate these concerns, DOE, DOD, and State have provided generators and other equipment at remote border sites to ensure stable supplies of electricity and, when appropriate, heat shields or other protection to ensure the effectiveness of radiation detection equipment. We also reported that State’s ability to carry out its role as lead interagency coordinator of U.S. radiation detection equipment assistance has been limited by deficiencies in its strategic plan for interagency coordination and by its lack of a comprehensive list of all U.S. radiation detection equipment assistance. In response to a recommendation we made in 2002, State led the development of a governmentwide plan to coordinate U.S. radiation detection equipment assistance overseas. This plan broadly defines a set of interagency goals and outlines the roles and responsibilities of participating agencies. However, the plan lacks key components, including overall program cost estimates, projected time frames for program completion, and specific performance measures. Without these elements in the plan, State will be limited in its ability to effectively measure U.S. programs’ progress toward achieving the interagency goals. Additionally, in its role as lead interagency coordinator, State has not maintained accurate information on the operational status and location of all radiation detection equipment provided by U.S. programs. While DOE, DOD, and State each maintain lists of radiation detection equipment provided by their programs, they do not regularly share such information, and no comprehensive list of all equipment provided by U.S. programs exists. For example, according to information we received from program managers at DOE, DOD, and State, more than 7,000 pieces of handheld radiation detection equipment had been provided to 36 foreign countries through the end of fiscal year 2005. Because much of this equipment was provided to the same countries by multiple agencies and programs, it is difficult to determine the degree to which duplication of effort has occurred. 
Without a coordinated master list of all U.S.-funded equipment, program managers at DOE, DOD, and State cannot accurately assess if equipment is operational and being used as intended, determine the equipment needs of countries where they plan to provide assistance, or detect whether an agency has unknowingly supplied duplicative equipment. Through December 2005, DHS had installed about 670 radiation portal monitors nationwide— about 22 percent of the portal monitors DHS plans to deploy—at international mail and express courier facilities, land border crossings, and seaports in the United States. DHS has completed portal monitor deployments at international mail and express courier facilities and the first phase of northern border sites—57 and 217 portal monitors, respectively. In addition, by December 2005, DHS had deployed 143 of 495 portal monitors at seaports and 244 of 360 at southern border sites. As of February 2006, CBP estimated that, with these deployments, it has the ability to screen about 62 percent of all containerized shipments entering the United States (but only 32 percent of all containerized seaborne shipments) and roughly 77 percent of all private vehicles. DHS plans to deploy 3,034 portal monitors by September 2009 at a cost of $1.3 billion. However, the final costs and deployment schedule are highly uncertain because of delays in releasing appropriated funds to contractors, difficulties in negotiating with seaport operators, and uncertainties in the type and cost of radiation detection equipment DHS plans to deploy. Further, to meet this goal, DHS would have to deploy about 52 portal monitors a month for the next 4 years—a rate that far exceeds the 2005 rate of about 22 per month. In particular, several factors have contributed to the delay in the deployment schedule. First, DHS provides the Congress with information on portal monitor acquisitions and deployments before releasing any funds. However, DHS’s cumbersome review process has consistently caused delays in providing such information to the Congress. For example, according to the House Appropriations Committee report on DHS’s fiscal year 2005 budget, CBP should provide the Congress with an acquisition and deployment plan for the portal monitor program prior to funding its contractors. This plan took many months to finalize, mostly because it required multiple approvals within DHS and the Office of Management and Budget prior to being submitted to the Congress. The lengthy review process delayed the release of funds and, in some cases, disrupted and delayed deployment. Second, difficult negotiations with seaport operators about placement of portal monitors and screening of railcars have delayed deployments at U.S. seaports. Many seaport operators are concerned that radiation detection equipment may inhibit the flow of commerce through their ports. In addition, seaports are much larger than land border crossings, consist of multiple terminals, and may have multiple exits, which may require a greater number of portal monitors. Further, devising an effective way to conduct secondary inspections of rail traffic as it departs seaports without disrupting commerce has delayed deployments. This problem may worsen because the Department of Transportation has forecast that the use of rail transit out of seaports will probably increase in the near future. 
Finally, DHS's $1.3 billion estimate for the project is highly uncertain, in part, because of uncertainties in the type and cost of radiation detection equipment that DHS plans to deploy. The estimate is based on DHS's plans for widespread deployment of advanced technology portal monitors, which are currently being developed. However, the prototypes of this equipment have not yet been shown to be more effective than the portal monitors now in use, and DHS officials say they will not purchase the advanced portal monitors unless they are proven to be clearly superior. Moreover, when advanced technology portal monitors become commercially available, experts estimate that they will cost between about $330,000 and $460,000 each, far more than the currently used portal monitors, whose costs range from about $49,000 to $60,000. Even if future test results indicate better detection capabilities, without a detailed comparison of the two technologies' capabilities it would not be clear that the dramatically higher cost for this new equipment would be worth the investment. We also identified potential issues with the procedures CBP inspectors use to perform secondary inspections that, if addressed, could strengthen the nation's defenses against nuclear smuggling. For example, CBP's procedures require only that officers locate, isolate, and identify radiological material. Typically, officers perform an external examination by scanning the sides of cargo containers with handheld radiation detection equipment during secondary inspections. CBP's guidance does not specifically require officers to open containers and inspect their interiors, even when their external examination cannot unambiguously resolve the alarm. However, under some circumstances, opening containers can improve security by increasing the chances that the source of radioactivity that originally set off the alarm will be correctly located and identified. The second potential issue with CBP's procedures involves NRC documentation. Individuals and organizations shipping radiological materials to the United States must generally acquire an NRC license, but according to NRC officials, the license does not have to accompany the shipment. Although inspectors examine such licenses when these shipments arrive at U.S. ports of entry, CBP officers are not required to verify that shippers of radiological material actually obtained required licenses or to authenticate licenses that accompany shipments. We found that CBP inspectors lack access to NRC license data that could be used to authenticate a license at the border. This concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841 or at aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. R. Stockton Butler, Nancy Crothers, Jim Shafer, and Eugene Wisnoski made key contributions to this statement. Combating Nuclear Smuggling: DHS Has Made Progress in Deploying Radiation Detection Equipment at U.S. Ports of Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006. 
Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005. Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005. Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002. Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002. Nuclear Nonproliferation: U.S. Efforts to Combat Nuclear Smuggling. GAO-02-989T. Washington, D.C.: July 30, 2002. Nuclear Nonproliferation: U.S. Efforts to Help Other Countries Combat Nuclear Smuggling Need Strengthened Coordination and Planning. GAO-02-426. Washington, D.C.: May 16, 2002.
GAO is releasing two reports today on U.S. efforts to combat nuclear smuggling in foreign countries and in the United States. Together with the March 2005 report on the Department of Energy's Megaports Initiative, these reports represent GAO's analysis of the U.S. effort to deploy radiation detection equipment worldwide. In my testimony, I will discuss (1) the progress made and challenges faced by the Departments of Energy (DOE), Defense (DOD), and State in providing radiation detection equipment to foreign countries and (2) the Department of Homeland Security's (DHS) efforts to install radiation detection equipment at U.S. ports of entry and challenges it faces. Regarding the deployment of radiation detection equipment in foreign countries, DOE, DOD, and State have spent about $178 million since fiscal year 1994 to provide equipment and related training to 36 countries. For example, through the end of fiscal year 2005, DOE's Second Line of Defense program had completed installation of equipment at 83 sites, mostly in Russia. However, these agencies face a number of challenges that could compromise their efforts, including corruption of foreign border security officials, technical limitations and inadequate maintenance of some equipment, and the lack of supporting infrastructure at some border sites. To address these challenges, U.S. agencies plan to take a number of steps, including combating corruption by installing multitiered communications systems that establish redundant layers of accountability for alarm response. State coordinates U.S. programs to limit overlap and duplication of effort. However, State's ability to carry out this role has been limited by deficiencies in its interagency strategic plan and its lack of a comprehensive list of all U.S. radiation detection equipment provided to other countries. Domestically, DHS had installed about 670 radiation portal monitors through December 2005 and provided complementary handheld radiation detection equipment at U.S. ports of entry at a cost of about $286 million. DHS plans to install a total of 3,034 radiation portal monitors by the end of fiscal year 2009 at a total cost of $1.3 billion. However, the final costs and deployment schedule are highly uncertain because of delays in releasing appropriated funds to contractors, difficulties in negotiating with seaport operators, and uncertainties in the type and cost of radiation detection equipment DHS plans to deploy. Overall, GAO found that U.S. Customs and Border Protection (CBP) officers have made progress in using radiation detection equipment correctly and adhering to inspection guidelines, but CBP's secondary inspection procedures could be improved. For example, GAO recommended that DHS require its officers to open containers and inspect them for nuclear and radioactive materials when they cannot make a determination from an external inspection and that DHS work with the Nuclear Regulatory Commission (NRC) to institute procedures by which inspectors can validate NRC licenses at U.S. ports of entry.
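The deployment pace implied by the figures above can be checked with simple arithmetic. The sketch below uses the totals cited in this statement (670 portal monitors installed through December 2005 and 3,034 planned by the end of fiscal year 2009) and assumes a window of roughly 45 months; the exact monthly rate depends on the schedule DHS actually follows.

# Illustrative arithmetic only; the 45-month window (January 2006 through
# September 2009) is an assumption used to approximate the required pace.
installed = 670            # portal monitors deployed through December 2005
planned_total = 3_034      # portal monitors DHS plans to deploy by fiscal year 2009
months_remaining = 45

remaining = planned_total - installed            # 2,364 monitors left to deploy
rate_needed = remaining / months_remaining       # roughly 52 monitors per month
share_installed = installed / planned_total      # roughly 22 percent

print(f"Remaining monitors: {remaining}")
print(f"Required pace: about {rate_needed:.0f} per month")
print(f"Share already installed: {share_installed:.0%}")

Both results are consistent with the figures cited in the statement: about 22 percent of planned monitors installed, and a required pace of roughly 52 monitors per month going forward.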
This section discusses DOE's missions and spending, contract types, contract oversight, IPERA risk assessment and IG requirements, and the roles and responsibilities of organizations involved in DOE's IPERA activities. DOE's missions include developing, maintaining, and securing the nation's nuclear weapons capability; cleaning up the environmental damage resulting from more than 60 years of producing nuclear weapons; and conducting basic energy and science research and development. The department carries out these diverse missions at 85 different sites across the country, including major laboratories and field facilities. With a DOE workforce of about 15,000 employees and more than 100,000 contractor staff, the department relies primarily on its contractors to manage and operate its sites and accomplish its missions. DOE oversees the work of its contractors through its staff and program offices at DOE headquarters and its field offices. For example, DOE contracting officers provide oversight and ensure contractors are in compliance with the terms of their contracts. In fiscal year 2013, DOE spent about 90 percent of its total annual budget, or $24 billion of $26.4 billion, on contracts. A significant share of this spending, about $17.1 billion in fiscal year 2013, was for management and operating (M&O) contracts, which DOE generally uses to manage DOE laboratories and other government-owned or government-controlled facilities. DOE's M&O contracts, among other things, provide contractors with the authority to draw money directly from DOE-funded accounts to pay for contract performance. In contrast, for the less common non-M&O contracts, DOE relies on more traditional bill payment methods—which include receipt of an invoice, payment approval and authorization, and disbursement of funds. In addition to conducting work through its contractors, DOE manages a number of grant and loan programs—which accounted for about $2.4 billion of DOE spending in fiscal year 2013. DOE also includes the Federal Energy Regulatory Commission and the Power Marketing Administrations. Federal agencies can choose among a number of different types of contracts to procure goods and services, including fixed-price, time-and-materials, and cost-reimbursement contracts. The choice of contract type is a principal means for agencies to divide the risk of cost overruns between the government and the contractor. For example, under a firm-fixed-price contract, the contractor assumes most of the cost risk; by accepting responsibility for completing a specified amount of work for a fixed price, the contractor earns a profit if the total costs it incurs in performing the contract are less than the contract price, but loses money if its total costs exceed the contract price. Under a time-and-materials contract, by contrast, the government bears the risk of cost overruns because payment is based on the number of labor hours billed at a fixed hourly rate that includes wages, overhead, general and administrative costs, and profit, plus the costs of materials, if applicable. However, time-and-materials contracts include a ceiling price that the contractor exceeds at its own risk, meaning there is no guarantee that costs above the ceiling price will be reimbursed by the government. 
Under cost-reimbursement types of contracts, the government assumes the cost risk because it pays the contractor's allowable costs incurred, to the extent prescribed by the contract, although these contracts also establish a ceiling that the contractor exceeds at its own risk. In fiscal year 2013, about 90 percent, or $21.7 billion, of DOE's total contract spending was on cost-reimbursement type contracts that include contractor fees, according to DOE officials. This includes cost-plus-fixed-fee, cost-plus-incentive-fee, and cost-plus-award-fee contracts. Under these types of contracts, the federal agency reimburses a contractor for all allowable costs and also pays a fee that is either fixed at the outset of the contract or adjustable based on objective or subjective performance criteria set out in the contract. Cost-reimbursement types of contracts place the primary risk of cost overruns on the government because of the potential for the government to pay more than the contract's estimated cost and because the government must reimburse the contractor's costs of performance up to the contract cost ceiling regardless of whether the end item or service is completed. In a September 2009 report, we concluded that cost-reimbursement types of contracts are suitable only when the agency's requirements cannot be defined sufficiently or the cost of the work cannot be estimated with sufficient accuracy to allow for the use of any type of fixed-price contract. Cost-reimbursement type contracts allow the agency to contract for work that might otherwise present too great a risk to contractors. The choice of a contract type—and whether the contract is an M&O contract or not—will also affect the types of internal control and contract auditing activities needed to help protect the government's interests and reduce the risk of improper payments. Under federal standards for internal control, control activities are an integral part of an entity's planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Control activities include both preventive and detective controls. Preventive controls—such as invoice review prior to payment—are controls designed to prevent improper payments (errors and fraud), waste, and mismanagement. Detective controls—such as incurred cost audits—are designed to identify errors or improper payments after the payment is made. Incurred cost audits are intended to examine contractors' cost representations and reach an opinion on whether the costs are allowable, allocable to government contracts, and reasonable in accordance with the contract and applicable government acquisition regulations. We have previously concluded that a sound system of internal controls contains a balance of both preventive and detective controls that is appropriate for the agency's operations. DOE's contracting activities for both M&O and non-M&O contracts are governed by federal law and regulations, including the Federal Acquisition Regulation as supplemented by the Department of Energy Acquisition Regulation. The contracting cycle consists of activities throughout the acquisition process, including preaward, award, and contract administration and management. Prior to contract award, an agency generally identifies a need and develops a requirements package. 
Under the Federal Acquisition Regulation, the agency generally determines the method of acquisition; solicits and evaluates bids or proposals; determines the adequacy of the contractor's accounting system; and ultimately negotiates a price and contract terms, resulting in the contract award. After contract award, the Federal Acquisition Regulation generally requires the agency to perform activities related to contract administration and management, which involves monitoring the contractor's performance, as well as reviewing and approving (or disapproving) the contractor's requests for payments. Contract auditing assists in achieving prudent contracting by providing those responsible for government procurement with financial information and advice relating to contractual matters and the effectiveness, efficiency, and economy of contractors' operations. Depending on the contract type, various contract audit activities can occur in the preaward, award, and administration and management phases of a contract. For example, before awarding a cost-reimbursement or other non-fixed-price type contract, the Federal Acquisition Regulation requires agency contracting officers to determine the adequacy of a contractor's accounting system. After certain types of contracts are awarded, contract audits—including incurred cost audits—are intended to be a key control to help ensure that contractors are charging the government in accordance with applicable laws, regulations, and contract terms. At DOE, the requirements and responsibility for conducting contract and other audits—including incurred cost audits and audits of subcontractor costs—vary, depending on whether the contract is an M&O or a non-M&O type contract, as follows: M&O contracts. In its M&O contracts, DOE does not require contractors to submit invoices; instead, the agency provides contractors with the authority to draw funds directly from federal accounts to pay for contract performance. Therefore, DOE does not rely on traditional invoice reviews prior to payment as a means of helping prevent improper payments. Instead, DOE relies on a combination of audits of contractor accounting systems and certain detective controls. Specifically, using a process known as the Cooperative Audit Strategy, DOE relies on its M&O contractors to perform the audit work necessary to ensure that their accounting systems are adequate and that they are charging DOE for only those costs that are allowable under the contract. As part of DOE's Cooperative Audit Strategy, M&O contractors are required to maintain an internal audit organization that is responsible for performing operational and financial audits, including incurred cost audits, and assessing the adequacy of management control systems. In addition, M&O contractors are required to provide adequate audit coverage of subcontractors where costs incurred are a factor in determining the amount payable. M&O contractors are also required to submit an annual Statement of Costs Incurred and Claimed that includes the contractor's certification that the costs claimed represent allowable contract costs. To support this statement, the contractors' internal audit organization conducts an annual incurred cost audit. 
Among other things, in conducting the annual incurred cost audit, the internal auditors are expected to develop a sampling methodology that will allow them to test selected transactions to determine whether the associated costs are allowable under the contracts' terms and to make projections regarding the total amount of unallowable costs based on the testing results. According to DOE's Financial Management Handbook, under the Cooperative Audit Strategy, DOE's IG is required to perform an annual assessment of these statements for the 10 M&O contractors that incurred and claimed the most costs. For the remaining M&O Statements of Costs Incurred and Claimed, the IG is required to perform assessments on a rotational basis, meaning the IG reviews a few each year until it completes all of the remaining ones and then starts over again. DOE officials cite the Cooperative Audit Strategy as a key internal control. Non-M&O contracts. Non-M&O contractors do not fall under DOE's Cooperative Audit Strategy and therefore are not required to submit an annual Statement of Costs Incurred and Claimed, maintain an internal audit organization, or provide audit coverage of subcontracts. Instead, DOE relies on traditional bill payment methods, which include prepayment review of invoices, for its non-M&O contracts. DOE also relies on contract audits—including incurred cost audits—for detecting improper payments. The Defense Contract Audit Agency (DCAA) has traditionally been the primary auditor for non-M&O contracts—performing preaward and annual incurred cost audits to ensure that non-M&O contractor costs are allowable under the contract. According to DOE's acquisition guide, the majority of DOE's contract dollars have traditionally been spent on M&O contracts, and DCAA services were used for the few other DOE contracts that were typically of small dollar value. More recently, however, DOE has expanded its use of non-M&O contracts. Regardless of the approach used, DOE contracting officers are responsible for determining whether costs incurred are allowable under the contract. During the course of conducting incurred cost audits, auditors sometimes question the allowability of certain costs. Based on this information, contracting officers may eventually decide to disallow certain costs. Before moving to disallow costs, however, the Federal Acquisition Regulation requires agencies to "make every reasonable effort" to reach a satisfactory settlement with the contractor. Under IPERA and OMB's implementing guidance, which together provide the specific requirements for assessing and reporting on improper payments, federal agencies are required to review all programs and activities that they administer and identify any program that may be susceptible to significant improper payments—a process known as a risk assessment. Agencies must institute a systematic method of reviewing and assessing their programs, which may take the form of either a quantitative analysis based on a statistical sample or a qualitative evaluation. IPERA requires that agencies, in performing their risk assessments, take into account those risk factors that are likely to contribute to significant improper payments, such as 1. whether the program or activity reviewed is new to the agency; 2. the complexity of the program or activity reviewed, particularly with respect to determining correct payment amounts; 3. the volume of payments made annually; 4. 
whether payments or payment eligibility decisions are made outside of the agency, for example, by a state or local government, or a regional federal office; 5. recent major changes in program funding, authorities, practices, or procedures; 6. the level, experience, and quality of training for personnel responsible for making program eligibility determinations or certifying that payments are accurate; and 7. significant deficiencies in the audit reports of the agency, including but not limited to agency Inspector General or Government Accountability Office audit report findings, or other relevant management findings that might hinder accurate payment certification. OMB's implementing guidance added an eighth risk factor, directing agencies to consider the results from prior improper payment work. For the purposes of this report, we will refer to these as the eight risk factors. It is important to note that these eight risk factors do not necessarily represent all of the risks for improper payments across all federal agency programs. OMB's guidance describes these risk factors as the minimum that agencies should consider. Under IPERA, an agency's assessment of risk factors likely to contribute to significant improper payments may include other risk factors, as appropriate, specific to the program or activity being assessed. We have reported on the importance of risk assessments for managing improper payments and best practices for conducting them. As described in our executive guide for helping agencies identify effective strategies to manage improper payments in their programs, a risk assessment is a comprehensive review and analysis of program operations to determine if risks exist and the nature and extent of the risks identified. The information an agency develops during a risk assessment forms the foundation or basis upon which agency management can determine the nature and type of corrective actions needed, and it gives management baseline information for measuring progress in reducing improper payments. In addition, reducing improper payments, according to our executive guide, requires a strategy appropriate to the organization involved and its particular risks. Under IPERA, agencies were required to conduct risk assessments for all federal programs and activities in fiscal year 2011 and at least once every 3 years thereafter for programs and activities deemed not risk susceptible. As discussed previously, DOE reported in fiscal year 2011 that it did not have any programs susceptible to significant improper payments. However, we note that, in fiscal years 2012 and 2013, the department elected to conduct certain risk assessment-related activities that were not required under IPERA. Under IPERA, if, in its risk assessment, an agency finds that a program is susceptible to significant improper payments, the agency must conduct annual statistical sampling of payment transactions to estimate improper payments, publicly report the results, and implement corrective actions to reduce future improper payments. Because DOE reported in fiscal years 2011 through 2013 that none of its programs were susceptible to significant improper payments, under IPERA, the department was not required to take these additional steps. Under IPERA, however, all agencies are required to identify and recover improper overpayments by conducting recovery audits, also known as payment recapture audits, for agency programs that expend $1 million or more annually, if such audits would be cost-effective. 
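Because the eight risk factors are qualitative, a payment site's assessment ultimately reduces to a documented judgment on each factor and a defensible roll-up into an overall rating. The sketch below shows one generic way such a roll-up could be recorded; it is purely illustrative, the factor labels paraphrase the factors listed above, and the scoring scale and thresholds are assumptions rather than DOE or OMB methodology.

# Illustrative only: not DOE's or OMB's prescribed method.
RISK_FACTORS = [
    "new_program", "payment_complexity", "annual_payment_volume",
    "decisions_made_outside_agency", "recent_major_changes",
    "personnel_training_and_experience", "significant_audit_deficiencies",
    "prior_improper_payment_results",
]

def overall_rating(scores):
    """Roll up per-factor scores (1 = low, 2 = medium, 3 = high) into a site rating."""
    missing = [factor for factor in RISK_FACTORS if factor not in scores]
    if missing:
        raise ValueError(f"assessment must address all eight factors; missing: {missing}")
    if any(scores[factor] == 3 for factor in RISK_FACTORS):
        return "high"
    if sum(scores.values()) > len(RISK_FACTORS) * 1.5:   # assumed threshold
        return "medium"
    return "low"

# Example: a site scores every factor and documents a rationale for each.
example_scores = {factor: 1 for factor in RISK_FACTORS}
example_scores["payment_complexity"] = 2
print(overall_rating(example_scores))   # prints "low"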
OMB requires agencies, including DOE, to report annually on their recovery auditing efforts in their Performance and Accountability Reports or their Agency Financial Reports. Additionally, IPERA requires that each fiscal year, as first implemented in fiscal year 2011, the IG of each agency determine whether the agency is in compliance with certain criteria in IPERA and submit a report on that determination to the head of the agency and others. Specifically, IGs are to determine whether agencies 1. published a Performance and Accountability Report or Agency Financial Report for the most recent fiscal year and posted that report and any accompanying materials required by OMB on the agency website; 2. conducted a program-specific risk assessment for each program or activity that conforms with IPERA (if required); 3. published improper payment estimates for all programs and activities identified as susceptible to significant improper payments under its risk assessment (if required); 4. published programmatic corrective action plans in the Performance and Accountability Report or Agency Financial Report (if required); 5. published, and have met, annual reduction targets for each program assessed to be at risk and measured for improper payments; 6. reported a gross improper payment rate of less than 10 percent for each program and activity for which an improper payment estimate was obtained and published in the Performance and Accountability Report or Agency Financial Report; and 7. reported information on its efforts to recapture improper payments. In its fiscal year 2011 report on IPERA compliance, DOE's IG reported that the department had not met the OMB criteria in its implementation guidance for compliance under IPERA. Among other things, the IG reported that DOE, in its review of programs to determine whether any might be susceptible to significant improper payments, had inconsistently executed its risk assessments. The IG recommended, among other things, that DOE implement policies and procedures to ensure oversight and communication of the application of the improper payment definition by its sites and adherence to the prescribed guidance. DOE concurred with this recommendation. In subsequent reports on IPERA compliance for fiscal years 2012 and 2013, the IG found that DOE had complied with all requirements of IPERA. DOE's Office of the CFO, hereafter referred to as the DOE headquarters CFO, is responsible for issuing IPERA guidance and consolidating and reporting improper payments information annually in DOE's Agency Financial Report. DOE's contractors, along with other DOE field office staff, provide information that is the basis for DOE's IPERA risk assessment and reporting activities. In addition to having contractor internal auditors, DOE has M&O contractor CFOs who play a role in assessing risk and reporting improper payment information. Generally, contractor CFOs assist in preparing the payment sites' risk assessment and improper payment data. DOE's 11 field CFOs, in cooperation with DOE contracting officers, are responsible for overseeing contractor and other activities in the field and assist DOE's headquarters CFO in implementing IPERA requirements. DOE developed a process to assess its programs for risks of improper payments in fiscal year 2011 that included both a qualitative risk assessment and quantitative information on improper payments. 
However, based on our evaluation of the department’s fiscal year 2011 risk assessment process, we found that DOE did not prepare risk assessments for all of its programs, and the quantitative information reported was not reliable; DOE’s risk assessments did not always include a clear basis for the risk determination; and DOE’s risk assessments did not fully evaluate other relevant risk factors. In addition, because DOE found its programs to be at low risk for significant improper payments in fiscal year 2011, the department was not required to prepare risk assessments again until fiscal year 2014. In fiscal years 2012 and 2013, although not required, DOE directed its sites to prepare an overall risk assessment rating and information on the amount of actual improper payments identified through the normal course of business. However, we found that the information reported in fiscal years 2012 and 2013 constituted less information on improper payments risk than what was provided in fiscal year 2011, and the information reported provided limited insight into the risk of improper payments. To comply with IPERA, DOE developed a process in fiscal year 2011 to assess its programs’ risks for improper payments. DOE defined its programs as including both the sites responsible for making payments on behalf of DOE (hereafter referred to as payment sites) and its grant and loan programs. Specifically, in 2011, DOE identified 55 payment sites as programs. Of those sites, 38 were contractor sites, which include sites such as DOE laboratories, weapons production facilities and major cleanup sites. The remaining 17 payment sites were managed by DOE. These sites include local DOE site offices and the Oak Ridge Financial Service Center (collectively referred to as DOE field office sites); the department’s four Power Marketing Administrations; and the Federal Energy Regulatory Commission. To aid in its compliance with IPERA, DOE issued guidance in fiscal year 2011 that directed payment sites to (1) develop a site-specific risk assessment that takes into account, at a minimum, the eight risk factors, (2) prepare a statistically valid estimate of the annual amount of improper payments, and (3) submit a copy of the risk assessment and improper payments estimate to the DOE headquarters CFO. DOE’s fiscal year 2011 guidance did not specify who would be responsible for evaluating the risks of DOE’s grant and loan programs, but DOE officials told us that DOE headquarters was responsible for performing this function. DOE officials told us that under this process, cognizant DOE field CFO offices reviewed payment site risk assessments before they were submitted to the headquarters CFO. Based on the risk assessments and statistical sampling information that payment sites submitted to the headquarters CFO, DOE determined in 2011 that it did not have any programs susceptible to significant improper payments. Additionally, DOE reported in its Fiscal Year 2011 Agency Financial Report that its estimate of the annual amount of improper payments from statistical sampling was $17.5 million out of $31.2 billion in total outlays, which represents an overall improper payment rate of .06 percent. DOE did not prepare risk assessments for nearly half of its payment sites for fiscal year 2011, and the quantitative information that payment sites reported for improper payments was not reliable. In addition, DOE did not prepare risk assessments for its grant and loan programs for fiscal year 2011. 
We found that 26 of the 55 payment sites that DOE had designated as programs for fiscal year 2011 did not prepare risk assessments. Of these 26 sites, 11 sites did not submit either a qualitative assessment or quantitative information, and 15 submitted quantitative information on the site's estimated amount of improper payments but did not provide a qualitative assessment of risk, as required by DOE guidance. IPERA requires federal agencies to assess the risk of all programs for significant improper payments. DOE had a process and guidance in place for conducting risk assessments, and DOE's fiscal year 2011 guidance directed each payment site to complete a risk assessment that, at a minimum, considered the eight risk factors. DOE's guidance also states that each site will provide a copy of the risk assessment to the DOE headquarters CFO to support their conclusions. However, 26 sites did not prepare and submit risk assessments as required (i.e., 10 non-M&O contractor payment sites, 11 DOE field office sites, and 5 M&O contractor sites). DOE officials said the 10 non-M&O payment sites did not prepare risk assessments for fiscal year 2011 because they were covered as part of the risk assessments conducted by the cognizant DOE field office that year. In reviewing risk assessments, we found that 3 of the 10 non-M&O payment sites were discussed in the assessment by a cognizant DOE field office site—the Richland Office. However, the discussion of the non-M&O sites did not constitute a risk assessment for those sites because the Richland Office only made limited mention of the internal controls used by these 3 non-M&O sites, rather than a more robust assessment of the risk factors. Moreover, we found no evidence that the remaining 7 non-M&O sites were assessed by the cognizant field office site—in part, because many of the other cognizant field office sites did not prepare risk assessments in 2011. DOE officials told us that the Oak Ridge Office, which prepared a risk assessment in 2011, was the cognizant DOE field office that covered the risk assessments for some of the non-M&O contracts. However, we found that its risk assessment did not address the eight risk factors as they relate to the specific payment processes and controls at the non-M&O contractor sites. For example, at the time of the fiscal year 2011 reporting, the Oak Ridge payment site oversaw USA Repository Services LLC, a non-M&O payment site, but the Oak Ridge risk assessment does not mention the contractor or discuss any of the processes and controls specific to that contractor. Assessing risk at the non-M&O contractors is important because many of the prepayment review processes and controls that impact the risk associated with making an improper payment reside at the non-M&O contractor site. For example, upon receipt of an invoice, DOE officials at the non-M&O site are responsible for verifying that the goods and services reflected on the invoice have been received. Regardless of whether the cognizant DOE field site's risk assessment covered these non-M&O contractors, not having completed risk assessments for these non-M&O contractor sites limited the information DOE needed to assess the risk for all of its programs. For the 11 DOE field office sites that did not prepare and submit risk assessments as required, DOE officials said that the 11 sites did not have to prepare risk assessments. 
Absent their inclusion in a risk assessment prepared for some other program or activity within DOE, this statement is not consistent with IPERA, and again not having completed risk assessments for these 11 field sites limited the information DOE needed to assess the risk for all of its programs. DOE officials further explained that they believe the 5 M&O contractor sites did prepare risk assessments for fiscal year 2011, but the DOE officials were unable to locate those risk assessments in their files. As discussed later in this report, in fiscal year 2012, all but 4 payment sites prepared and submitted risk assessment ratings and, in fiscal year 2013, all payment sites prepared and submitted risk assessment ratings. In July 2014, DOE issued its IPERA risk assessment guidance for fiscal year 2014 with a number of revisions. One revision directs DOE field office sites to consider the payment processes of the non-M&O contractors they oversee when completing required risk assessments. However, the guidance does not specify that those sites should address the eight risk factors as they relate to the non-M&O sites. Without directing field office sites in guidance to address the eight risk factors as they relate to the non-M&O contractor risk assessments, the sites cannot fully assess the risk of improper payments, and DOE cannot fully understand its risks for improper payments and take corrective actions to mitigate such risks in the future. The quantitative information on the amount of improper payments DOE reported in its Fiscal Year 2011 Agency Financial Report was not reliable because it was not complete and did not match the total information submitted by payment sites. As discussed previously, DOE determined for 2011 that it did not have any programs susceptible to significant improper payments based on both the qualitative risk assessments prepared by the payment sites as well as the statistical sampling information that some payment sites submitted to the headquarters CFO. DOE reported in its Fiscal Year 2011 Agency Financial Report that its estimate of the annual amount of improper payments from statistical sampling was $17.5 million out of $31.2 billion in total outlays. However, our review could not verify the accuracy of the $17.5 million reported for two reasons. First, 13 payment sites did not submit to DOE quantitative information on their estimated improper payments or their outlays, so the information reported was incomplete. Second, for payment sites that submitted their information to DOE, the totals for the quantitative information submitted did not equal the amount reported in DOE’s Agency Financial Report. In addition, we did not evaluate the sampling methodology that DOE used to estimate its improper payments in fiscal year 2011 because the DOE IG previously reported on this issue and found problems with DOE’s methodology. In its fiscal year 2011 report on IPERA compliance, the DOE IG found that DOE used a nonstatistical sampling method to arrive at its estimated improper payment rate. The IG recommended that DOE develop a system of controls to help ensure the sampling methodologies used at the sites align with the methodology required in the department’s IPERA reporting guidance. At that time, DOE concurred with the recommendation. However, according to DOE officials, DOE decided not to conduct statistical sampling in later years because IPERA does not require that agencies perform statistical sampling as part of a risk assessment. 
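To show how a statistical sample differs from simply reporting improper payments identified in the normal course of business, the sketch below projects an improper payment rate from a simple random sample of payment records and applies it to total outlays. The data and sample size are made up, and the ratio-style projection shown here is a generic illustration, not the methodology DOE used in fiscal year 2011.

import random

def project_improper_payments(payments, sample_size, seed=1):
    """Estimate an improper payment rate from a simple random sample.

    Each record is a (paid_amount, improper_amount) pair; improper_amount would
    come from testing the sampled transaction against contract or program terms.
    """
    random.seed(seed)
    sample = random.sample(payments, sample_size)
    sampled_paid = sum(paid for paid, _ in sample)
    sampled_improper = sum(improper for _, improper in sample)
    rate = sampled_improper / sampled_paid
    total_outlays = sum(paid for paid, _ in payments)
    return rate, rate * total_outlays

# Made-up population: 10,000 payments of $1,000, of which 100 contain $50 errors.
payments = [(1_000.0, 0.0)] * 9_900 + [(1_000.0, 50.0)] * 100
rate, projected_dollars = project_improper_payments(payments, sample_size=500)
print(f"Sampled rate: {rate:.3%}; projected improper payments: ${projected_dollars:,.0f}")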
DOE did not prepare required risk assessments for its grant and loan programs for fiscal year 2011. As discussed previously, DOE officials told us that DOE headquarters was responsible for evaluating the risks of its grant and loan programs for improper payments for 2011. However, DOE headquarters officials told us that they did not prepare the required risk assessments for these programs for 2011. DOE headquarters officials said they did not conduct a risk assessment on grant programs for 2011 because they were awaiting more detailed guidance from OMB on how to assess grant programs under IPERA—specifically, whether to assess risk at the primary or the subrecipient level. In terms of the loan programs, DOE officials said that they held discussions with OMB and identified strong financial controls and oversight associated with the Federal Financing Bank that administers DOE’s loan payments and determined that the existence of these controls provided a low risk of improper payments in the loans area. Therefore, DOE officials concluded that a separate risk assessment for loans was not warranted for fiscal year 2011. However, DOE did not provide documentation to support this conclusion. Moreover, this is inconsistent with IPERA and OMB’s implementing guidance, which requires federal agencies to review all programs for significant improper payments, and DOE’s 2011 guidance, which directs each payment site to complete a risk assessment. In July 2014, DOE issued its IPERA risk assessment guidance for fiscal year 2014 with a number of revisions. One revision directs payment sites with cognizance over grants to report their known improper grant payments. Another revision directs DOE’s Loan Guarantee Program Office to prepare a risk assessment for DOE’s loan programs. In August 2014, DOE officials told us that cognizant payment sites will now be responsible for considering grant payments in their risk assessments, and that payment sites and the DOE Loan Office will explicitly address the risk factors for grant and loan programs, respectively. If implemented effectively, this revision to DOE’s guidance could address our findings related to DOE not fully assessing its grant and loan programs. DOE’s fiscal year 2011 risk assessments did not always include a clear basis for their risk determinations. For the 29 payment sites that prepared risk assessments for fiscal year 2011, we analyzed them to determine whether the risk assessments took into account the eight risk factors. Based on our analysis of the risk assessment documentation provided, we found that some payment sites did not take into account the eight risk factors. For those that did, the support for their conclusions varied widely, and some assessments did not contain enough information for us to determine how the payment sites arrived at their risk determination. Based on our analysis, we determined that at least 6 of the 29 sites that prepared risk assessments did not take into account the eight risk factors, making the basis of their risk assessment determinations unclear. For example, one site’s risk assessment did not address the eight factors and instead noted that it conducted a 100 percent payment review for all invoices and thus determined that its risk of improper payments was low. However, the risk assessment did not provide any information as to the results of its invoice reviews. 
In another instance, a site’s risk assessment consisted of two sentences noting that its account volume of payments had not changed significantly and that its funding, authorities, practices, and procedures, as well as the level and quality of training of its personnel, had not changed significantly. Based on this, the site concluded it had a low amount of improper payments and had controls in place to identify and record them. In a third instance, a site’s risk assessment contained information on its internal controls indicating that many of its payment processes were high risk. Specifically, this risk assessment rated each of the subprocesses associated with payroll administration, payables management, and travel administration as high or medium risk. For example, under the payables management subprocess, some of the high-risk areas noted included the unauthorized approval of invoices, payments made without an approved invoice, and invalid payees established in the payee data file. Nonetheless, this site concluded that its risk of improper payments was low, but it provided no additional clarification on how it arrived at this conclusion. Through our analysis, we also determined that, at most, the 23 remaining payment sites submitted risk assessments that took into account the eight risk factors. However, support for their conclusions varied widely, and some assessments did not contain enough information for us to determine how the payment sites arrived at their risk determination, raising questions about who at DOE was responsible for reviewing and approving risk assessments for consistency. DOE’s guidance directs its sites to submit a risk assessment to DOE headquarters that incorporates the eight factors in support of their risk determination. However, its guidance does not provide further direction on what should be provided in the assessment to address each risk factor. DOE officials told us that they left it up to the payment sites to determine how to address the eight risk factors. As a result, we found that the support provided to address each risk factor was inconsistent, ranging from several paragraphs of narrative to one-sentence answers or “yes or no” responses. In some cases, we could not determine how payment sites considered the eight risk factors to arrive at a risk determination. For example, in one case, the risk assessment was a table populated with a designation of high, medium, or low for each of the eight risk factors by specific payment functions, such as accounts payable, travel, and payroll. In this example, it was not clear how the site arrived at the risk designations for each of the specific payment functions or how the site weighted each risk designation to arrive at a risk determination for the program. DOE’s fiscal year 2011 IPERA guidance directed each site to provide a copy of the risk assessment to support its risk designation, but it did not specify how sites were to document the basis for their risk determinations. Under the Standards for Internal Control in the Federal Government, internal controls and all transactions and other significant events need to be clearly documented. Based on our review of DOE’s risk assessments, the documentation they contained did not always provide a clear basis for the risk determinations.
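The table-based example above, in which each payment function received a high, medium, or low designation with no explanation of how those designations rolled up to an overall determination, can be made concrete with a short sketch. The scheme below is purely hypothetical: the factor labels are generic placeholders rather than the exact wording of the eight risk factors, and the ratings, scoring, and thresholds are invented for illustration and do not represent DOE's guidance or any site's actual method.

```python
# Hypothetical illustration of one way high/medium/low ratings on individual
# risk factors could be rolled up into an overall risk determination.
# Factor labels are generic placeholders, and the ratings and thresholds are
# invented; this is not DOE's or any payment site's actual method.

FACTOR_SCORES = {"low": 1, "medium": 2, "high": 3}

factor_ratings = {
    "changes in program funding, authorities, or procedures": "low",
    "volume and complexity of payments": "medium",
    "level and quality of personnel training": "low",
    "maturity of the payment process": "medium",
    "findings in prior audit reports": "high",
    "reliance on subrecipients or contractors": "medium",
    "results of prior improper payment reporting": "low",
    "inherent risk of the payment type": "low",
}

def overall_rating(ratings, high_cutoff=2.5, medium_cutoff=1.5):
    """Average the factor scores and map the average back to a rating.

    Documenting whichever scheme is actually used (weights, cutoffs, and
    the resulting calculation) is what provides a clear basis for the
    overall risk determination.
    """
    average = sum(FACTOR_SCORES[r] for r in ratings.values()) / len(ratings)
    if average >= high_cutoff:
        return "high", average
    if average >= medium_cutoff:
        return "medium", average
    return "low", average

rating, score = overall_rating(factor_ratings)
print(f"Average factor score: {score:.2f} -> overall risk rating: {rating}")
```

Without documentation of this kind, a reviewer cannot tell whether a low overall rating reflects the individual designations or contradicts them.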
As with the discussion of risk factors, the support for risk determinations was inconsistent, ranging from several paragraphs of narrative to mere designations of high, medium, or low risk with no accompanying documentation. Absent clarification in guidance of how payment sites are to address the eight risk factors and document the basis for their risk rating determinations, DOE personnel may not have a consistent understanding of how to complete risk assessments. In addition, DOE’s fiscal year 2011 IPERA guidance did not specify who at DOE was responsible for reviewing and approving risk assessments for consistency with IPERA requirements and OMB and DOE guidance. Under the federal standards for internal control, federal agencies are to employ internal control activities, such as reviews by management at the functional or activity level, and such activities include approvals, authorizations, verifications, and reconciliations. As previously mentioned, DOE officials told us that cognizant DOE field CFOs reviewed payment site risk assessments. However, given the level of inconsistency we found in our review of payment site risk assessments, it is unclear who was reviewing the assessments. Without clarifying in guidance who at DOE is responsible for reviewing and approving risk assessments for consistency across sites, DOE may not have reasonable assurance that the assessments are receiving the same level of oversight at each site. As discussed previously, DOE issued new IPERA risk assessment guidance in July 2014 with a number of revisions. Among other things, these revisions are aimed at addressing inconsistencies in the risk assessments. One revision directs payment sites to include a brief explanation for each risk factor. DOE officials also told us in August 2014 that their IPERA training covers how payment sites are to perform risk assessments. However, the 2014 guidance does not specify how payment sites should address each factor and what documentation they are to include in support of their risk determinations, which is inconsistent with federal standards for internal control. As mentioned earlier, without clarifying in guidance how payment sites are to address the eight risk factors and document the basis for their risk rating determinations, DOE cannot be assured that its personnel have a consistent understanding of how to complete risk assessments. The 2014 guidance also does not clarify who at DOE is responsible for reviewing and approving risk assessments for consistency. As also mentioned earlier, without clarifying in guidance who at DOE is responsible for reviewing and approving risk assessments consistent with federal standards for internal control, DOE may not have reasonable assurance that the assessments are receiving the same level of oversight at each site. In addition, while DOE provided training for its payment sites for its fiscal year 2011 IPERA reporting, given the number of deficiencies we identified with that process, clarifying the guidance could help prevent inconsistencies in future risk assessments. DOE’s risk assessments did not fully evaluate other relevant risk factors. As previously stated, the eight risk factors do not necessarily represent all of the risks for improper payments across all federal agency programs, and OMB’s guidance describes these risk factors as the minimum that agencies should consider.
DOE’s 2011 IPERA guidance requires that programs consider, at a minimum, the eight risk factors, but it does not require programs to consider other factors that are specific to the program being assessed. In particular, DOE’s guidance does not require programs to consider, as part of their risk assessments, weaknesses in key controls for preventing and detecting improper payments. Control activities such as prepayment reviews and matching invoices with receiving reports are important for preventing improper payments, and contract audits—including subcontract audits and annual incurred cost audits—are intended to be a key control for detecting improper payments. However, the DOE IG found in April 2013 that, from 2010 to 2012, subcontracts with a total value in excess of $906 million had either not been audited by M&O contractors or had audits that did not meet audit standards. The report further noted that the insufficient audit coverage substantially increases the risk that improper payments will be incurred and not detected in a timely manner. In addition, DOE officials told us that contract audits, particularly for non-M&O contracts, are not always performed in a timely manner. DCAA has traditionally performed contract audits for DOE’s non-M&O contracts; however, a significant backlog of audits at the Department of Defense has impacted DCAA’s ability to perform work for other agencies, including DOE. Untimely contract audits, regardless of the cause, represent a risk that improper payments will not be identified in a timely manner. However, DOE’s 2011 guidance did not require that programs consider risk factors related to internal control weaknesses—such as untimely contract audits or inadequate subcontractor oversight. DOE’s fiscal year 2011 IPERA guidance states that programs must have an effective system of internal control to prevent and detect improper payments and to recover overpayments. The guidance also states that key controls should be tested as part of OMB Circular A-123 evaluations. A-123 is OMB’s Circular on reporting for internal controls and financial risks. Certain DOE officials said that during DOE’s IPERA training, sites have been instructed to consider the results of the A-123 evaluations, which include evaluation of key risks and controls, when determining susceptibility to high risk of improper payments. In addition, DOE officials said that DOE headquarters CFO officials have reviewed A-123 results across the department when determining the department’s overall risk. However, DOE does not require programs to consider weaknesses in its internal controls as part of its risk assessment. In our review of DOE’s fiscal year 2011 risk assessments, of the 29 sites that did risk assessments, at most, 10 included information stating that the results of A-123 evaluations were considered as part of the risk assessments. Information from A-123 evaluations on internal controls could potentially provide information relevant to assessing the risk of improper payments. However, based on the documentation provided in the fiscal year 2011 risk assessments, it was not clear how many sites considered the results of their A-123 evaluations and, for those that did, how those results were factored into the risk assessment. In implementing Standards for Internal Control in the Federal Government, management is responsible for developing the detailed policies, procedures, and practices to fit agency operations and to ensure that they are an integral part of operations.
In addition, according to our executive guide on strategies for managing improper payments, reducing improper payments requires a strategy appropriate to the organization involved and its particular risks. However, DOE’s 2011 IPERA guidance did not direct sites to augment the eight risk factors for a qualitative evaluation with other risk factors that might be appropriate to a program and its particular risks, so many of the payment sites did not fully consider other risk factors. In its July 2014 updated IPERA risk assessment guidance, DOE recognized the need to address other risk factors relevant to agencies’ operating environments. One revision directs payment sites to consider a ninth risk factor: evaluate the inherent risk of improper payments due to the nature of the agency’s programs/operations. The guidance states that this new risk factor was added based on a 2014 draft revision of OMB’s improper payments guidance. However, it is unclear how DOE’s guidance will be implemented by the department’s payment sites because the guidance does not provide specific examples of potential inherent risks for improper payments—such as untimely contract audits or inadequate subcontractor oversight—that all payment sites should consider, and this is not consistent with federal standards for internal control and effective strategies included in GAO’s executive guide. Without providing in its guidance specific examples of other risk factors that present inherent risks likely to contribute to improper payments and directing payment sites to consider those other factors when performing their improper payment risk assessments, DOE will not have reasonable assurance that its payment sites consistently consider other relevant risk factors to fully evaluate risks. In fiscal years 2012 and 2013, we found that DOE directed programs to report less information on improper payment risks. Specifically, DOE required fewer payment sites to report under IPERA and, for those sites that were required to report, we found that DOE requested less information on the risks of improper payments. DOE reported that it did not have any programs susceptible to significant improper payments in 2011. As previously discussed, we found that DOE did not fully consider program risks in its fiscal year 2011 risk assessments and included unreliable data, which raises questions about whether the 2011 assessments were reliable. Nonetheless, because of its 2011 determination that it did not have programs susceptible to significant improper payments, the department was not required under IPERA to prepare risk assessments in 2012 and 2013. DOE elected to conduct certain risk assessment-related activities in fiscal years 2012 and 2013. However, we found that the risk assessment and other related information that sites reported provided limited insight into the department’s risk of improper payments. In electing to conduct certain risk assessment-related activities in fiscal years 2012 and 2013, DOE required fewer sites to report and allowed the remaining sites to provide more limited information on risk. Specifically, for fiscal years 2012 and 2013, DOE’s guidance redefined its programs, reducing the number from 55 to 43 payment sites by combining certain contractor payment sites with payment sites managed by DOE. According to DOE officials, for the purposes of IPERA reporting, cognizant DOE field offices—which are themselves payment sites—are now responsible for assessing risk for all non-M&O contracts.
In addition, DOE’s fiscal year 2012 and 2013 guidance did not direct sites to submit risk assessments. Instead, the guidance directed sites to (1) prepare an overall risk assessment rating of high, medium, or low for the site based on the eight risk factors and the amount of actual improper payments identified through the normal course of business; (2) submit the overall risk rating and known improper payment information to the DOE headquarters CFO; and (3) maintain any detailed risk assessment support or other detailed support for the known improper payments data. DOE’s guidance included a reporting template listing the eight risk factors and a place for payment sites to indicate their overall risk rating, which DOE prepopulated with a low risk rating. The template also included tables to report information on known improper payments. According to DOE’s fiscal years 2012 and 2013 guidance, known improper payments include, among other things, payments identified by a contractor’s internal accounting practices or those identified during the course of IG audits. Based on our review of the reporting templates that were submitted by payment sites in fiscal years 2012 and 2013, we found that 4 payment sites did not submit a reporting template in 2012, but that all sites submitted a reporting template in 2013. In addition, we found that the overall risk assessment rating for each payment site provides limited insight into DOE’s risk for improper payments. Although DOE’s 2012 and 2013 guidance directed sites to maintain support for their overall risk assessment ratings, it did not require sites to submit supporting documentation for their risk ratings. The low risk designation that all of the sites provided in both years without supporting documentation did not provide information on how those sites considered the eight risk factors, how they weighed each factor against the others, or how they considered the eight factors in relation to their improper payments data to arrive at their overall risk rating. We also found that DOE’s reporting of a program’s amount of improper payments for fiscal years 2012 and 2013 provided limited insight into DOE’s risk of improper payments. Although IPERA and OMB guidance do not require it, DOE reports its total known improper payments annually in its Agency Financial Report. DOE cites this reporting as evidence in determining that its programs, and the department as a whole, are at low risk for improper payments. For example, in its Fiscal Year 2013 Agency Financial Report, DOE reported that it had identified $21.8 million in improper payments made in fiscal year 2012 out of $46.5 billion in total outlays. In reporting this number, DOE did not report the full extent of its improper payments because it did not disclose information on prior year improper payments. In addition, DOE did not disclose information on settled costs, as shown in the following:
Prior year improper payments. According to DOE officials, the amount of DOE’s total known improper payments does not include known improper payments identified in prior years. This means that improper payments that occurred in prior fiscal years but were not identified until a later reporting year are not included. Thus, the $21.8 million in improper payments that DOE reported in its Fiscal Year 2013 Agency Financial Report only includes improper payments made and identified during fiscal year 2012.
Therefore, DOE’s reporting does not provide the full extent of DOE’s total improper payments. Specifically, DOE pays contractors throughout the year for services performed, and those charges are subject to incurred cost audits to ensure that they are allowable under the terms of the contract. If charges are ultimately found to be unallowable by DOE, those charges are considered improper payments under IPERA. The process for ultimately determining that costs are unallowable can take a considerable amount of time, and the amount of money involved can be significant. For example, in April 2012 and October 2012, the IG reported about $4.4 million in disallowed costs identified in fiscal year 2012 related to prior year payments. However, this $4.4 million was not included when DOE reported its known improper payments for fiscal year 2012 in DOE’s Fiscal Year 2013 Agency Financial Report.
Settled costs. DOE’s IG and contractor internal auditors have the ability to question costs they find to be potentially unallowable under the terms of a contract. Once costs have been questioned, DOE must ultimately make a determination whether to allow or disallow those costs. Before disallowing costs, the Federal Acquisition Regulation requires agencies to “make every reasonable effort” to reach a satisfactory settlement with the contractor. In one settlement agreement we reviewed, the contractor agreed to reimburse DOE for $10 million in questioned costs, referring to them as “potential unallowable costs.” Because those costs are not explicitly identified as unallowable in the settlement agreement, DOE does not consider them improper under IPERA and therefore does not disclose them in its reporting. DOE officials told us that their reporting of current year known improper payments in their Agency Financial Report is consistent with OMB guidance. We recognize that DOE is reporting more information than is required. However, citing an amount of improper payments without further explanation is potentially misleading to external stakeholders, including Congress and the public. According to federal standards for internal control, effective communications should occur in a broad sense with information flowing down, across, and up the organization. Management should also ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. By not disclosing more information in its IPERA reporting about total known improper payments, DOE does not allow readers, including congressional and public stakeholders, to fully understand what the total known improper payments amount represents and the extent to which improper payments could potentially be more pervasive. Recognizing the importance of assessing the risks of improper payments, DOE issued new guidance in 2014 to address payment processes involving non-M&O contractors, to clarify the way payment sites address risk factors, and to consider inherent risks of improper payments due to the nature of the agency’s programs/operations. These are positive steps, but further efforts could help to more fully assess DOE’s risk of improper payments and make more effective use of DOE and contractor resources. Specifically, DOE’s 2014 guidance directs DOE field sites to consider the payment processes of the non-M&O contractors they oversee when completing required risk assessments.
However, the guidance does not specify that those sites should address the eight risk factors as they relate to the non-M&O sites. We found that risk assessments for non-M&O payment sites were not always conducted in fiscal year 2011. Without directing in its guidance that sites address the eight risk factors as they relate to the non-M&O contractor risk assessments, the sites cannot fully assess the risk of improper payments, and DOE cannot fully understand its risks for improper payments and take corrective actions to mitigate such risks in the future. In addition, DOE’s 2014 guidance directs payment sites to include a brief explanation for each risk factor supporting the risk rating. However, the 2014 guidance does not specify how payment sites should address each factor and what supporting documentation to include as the basis for their risk rating determinations, which is inconsistent with federal standards for internal control. Without clarifying in guidance how payment sites are to address the eight risk factors and document the basis for their risk rating determinations, DOE cannot be assured that its personnel have a consistent understanding of how to complete risk assessments. In addition, the 2014 guidance does not clarify who at DOE is responsible for reviewing and approving risk assessments for consistency. Without clarifying in guidance who at DOE is responsible for reviewing and approving risk assessments consistent with federal standards for internal control, DOE may not have reasonable assurance that the assessments are receiving the same level of oversight at each site. Furthermore, DOE’s 2014 guidance directs payment sites to consider in their risk assessments an additional, ninth risk factor on inherent risks, beyond the previous eight risk factors; considering such additional factors is necessary to be consistent with federal standards for internal control and GAO’s executive guide. However, it is unclear how DOE’s guidance will be implemented by the department’s payment sites because the guidance does not provide specific examples of potential inherent risks for improper payments—such as untimely contract audits or inadequate subcontractor oversight—that all payment sites should consider, and this is not consistent with GAO’s executive guide. Without providing specific examples in guidance of other risk factors that present inherent risks likely to contribute to improper payments and directing payment sites to consider those other factors when performing their improper payment risk assessments, DOE will not have reasonable assurance that its payment sites consistently consider other relevant risk factors. Finally, DOE annually reports the amount of its total known improper payments and cites this amount as a key reason why its programs and the department as a whole are low risk. However, this amount provides limited insight into the extent of improper payments and is potentially misleading. By disclosing additional information in its IPERA reporting, DOE could better position readers, including congressional and public stakeholders, to fully understand what the total known improper payments amount represents and the extent to which improper payments could potentially be more pervasive.
To help improve its ability to assess the risk of improper payments and make more effective use of DOE and contractor resources, we recommend the Secretary of Energy direct the department’s Chief Financial Officer to take the following four actions to revise the department’s IPERA guidance:
direct field office sites with responsibility for non-M&O contractor risk assessments to address risk factors as they relate to those sites and take steps to ensure sites implement it;
clarify how payment sites are to address risk factors and document the basis for their risk rating determinations and take steps to ensure sites implement it;
clarify who is responsible at DOE for reviewing and approving risk assessments for consistency across sites and take steps to ensure those entities implement it; and
provide specific examples of other risk factors that present inherent risks likely to contribute to significant improper payments, in addition to the eight risk factors, direct payment sites to consider those when performing their improper payment risk assessments, and take steps to ensure sites implement it.
To provide better transparency regarding its total known improper payments reported under IPERA, we recommend the Secretary of Energy direct the department’s Chief Financial Officer to improve public reporting on the amount of total known improper payments by disclosing additional information regarding this amount and the extent to which improper payments could be occurring. We provided a draft of this report to DOE for comment. In its initial comments, DOE had concerns with our recommendation to disclose more information on its total known improper payments number included in its Agency Financial Report. In reviewing DOE’s initial comments, it was clear there was a misunderstanding about the intent of the recommendation. Subsequently, we discussed the recommendation with DOE officials, clarified our intent, and modified the recommendation to ensure that DOE discloses information on the extent of improper payments that could be occurring. In its final comments, reproduced in appendix II, DOE concurred with all five of our recommendations. DOE also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Energy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines the extent to which the Department of Energy (DOE) assesses its programs’ risks for improper payments. To determine this, we reviewed the Improper Payments Elimination and Recovery Act of 2010 (IPERA). For additional context, we also reviewed the Improper Payments Information Act of 2002 and the Improper Payments Elimination and Recovery Improvement Act of 2012. We examined the Office of Management and Budget’s (OMB) and DOE’s IPERA guidance. We reviewed relevant effective practices for conducting risk assessments as described in our executive guide on strategies for managing improper payments.
Given the relevance and stated importance of DOE’s Cooperative Audit Strategy, we analyzed the strategy and related documents, including the DOE Office of Inspector General (IG) Audit Manual, the DOE Financial Management Handbook, contractor incurred cost audits, and IG reviews of those audits. We interviewed key officials with the DOE headquarters Office of the Chief Financial Officer (CFO). Specifically, we met with officials from the Office of Financial Control and Reporting within the Office of the CFO, which carries out DOE’s efforts to comply with IPERA by issuing guidance and consolidating and reporting information annually in DOE’s Agency Financial Report. We discussed DOE’s process for implementing IPERA, including how payment site risk assessments were reviewed and approved by DOE and how headquarters conducted risk assessments on the grant and loan programs. We interviewed IG officials to discuss their role in overseeing DOE’s IPERA implementation and DOE’s strategy to oversee the auditing of its contractors’ incurred costs. We reviewed the IG’s fiscal year 2011, 2012, and 2013 IPERA compliance audits, including how they were conducted and their findings, conclusions, and recommendations. We determined that these reports were sufficiently reliable for the purposes of using them to support our results. For fiscal years 2011 through 2013, we analyzed DOE’s IPERA reporting, including qualitative risk assessments and quantitative information. We chose to review fiscal years 2011 through 2013 because those were the years subject to IPERA requirements for which we had available documentation. We reviewed each risk assessment to determine if it (1) contained narrative responses specifically taking into account the eight factors; (2) provided a basis for the risk determination; and (3) if the site was a DOE field office, specifically addressed the eight risk factors with regard to any non-M&O contractors it oversees. We also determined if the risk assessment documented consideration of evaluations conducted pursuant to OMB Circular A-123. To assess the reliability of financial data used in DOE’s payment site risk assessments, we compared the figures reported in all payment site risk assessments associated with known improper payments and outlays with the aggregated figures contained in DOE’s Fiscal Year 2011 Agency Financial Report. Where applicable and appropriate, we also compared the figures reported in payment site risk assessments with the back-up documentation provided by various specific DOE payment sites (or “programs”). To gain additional context related to documenting these analyses, we also reviewed our Standards for Internal Control in the Federal Government. We met with two DOE field CFOs, in Oak Ridge, Tennessee, and Albuquerque, New Mexico, and with officials from DOE’s Oak Ridge Financial Services Center. We chose these two locations because they oversee IPERA reporting for M&O and non-M&O contracts that accounted for about 28 percent of DOE’s IPERA reported outlays in fiscal year 2013. In addition, we selected the Oak Ridge Financial Services Center to visit because it handles all payments made to non-M&O contracts. DOE’s 11 field CFOs, in cooperation with site-located contracting officers, oversee contractor and other activities in the field and assist DOE headquarters in carrying out IPERA.
We discussed how DOE payment sites were implementing IPERA and how payment site risk assessments were reviewed by DOE. During these trips, we also met with officials at six contractor site locations overseen by these field CFOs. These six contractor locations include the following: East Tennessee Technology Park; Los Alamos National Laboratory; Oak Ridge Associated Universities; Oak Ridge National Laboratory; Sandia National Laboratory; and Y-12 National Security Complex. We chose to visit these payment sites because they represent a cross section of the types of contractor payments made at DOE and because they accounted for about 38 percent of DOE’s total outlays in fiscal year 2013. At each payment site, we met with contractor CFO and internal audit officials, as well as the cognizant DOE contracting officer. During our meetings, we obtained perspectives from over 70 DOE and contractor officials involved with IPERA reporting, including those who had prepared or reviewed improper payment risk assessments. We also discussed the guidance and direction provided by DOE to payment sites in implementing IPERA, as well as consistency across DOE payment sites in preparing risk assessments. We reviewed prior GAO and IG reports that identified deficiencies in DOE internal controls, such as subcontract audits and annual incurred cost audits, including how they were conducted and their findings, conclusions, and recommendations. We also reviewed the IG’s fiscal year 2011, 2012, and 2013 IPERA compliance audits, including how they were conducted and their findings, conclusions, and recommendations. We interviewed IG officials to discuss their prior reports and their role in overseeing DOE’s IPERA implementation and DOE’s strategy to oversee the auditing of its contractors’ incurred costs. We determined that these reports were sufficiently reliable for the purposes of using them to support our results. In addition to the individual named above, Diane LoFaro (Assistant Director), Cheryl Arvidson, Vaughn Baltzly, Mark Braza, Mark Keenan, Jason Kirwan, Phillip McIntyre, Jeanette Soares, Kiki Theodoropoulos, Nicholas Weeks, and William Woods made key contributions to this report.
Improper payments are a significant problem in the federal government. To address this problem, IPERA requires that federal agencies review their programs and identify those that are susceptible to significant improper payments—a process known as a risk assessment. DOE's history of inadequate management and oversight of its contractors led GAO to designate DOE's contract management as a high-risk area vulnerable to fraud, waste, abuse, and mismanagement. However, DOE reported that it does not have any programs susceptible to significant improper payments. GAO was asked to review DOE's internal control environment, as it relates to IPERA, to determine whether the department was at low risk for significant improper payments. This report examines the extent to which DOE assessed its programs' risks for improper payments in fiscal years 2011 through 2013. GAO reviewed IPERA, analyzed all risk assessments and related information for this period, and interviewed DOE officials and six contractors selected to represent the types of contractor payments made. The Department of Energy (DOE) developed a process to assess its programs for risks of improper payments, but its assessments do not fully evaluate risk. To comply with the Improper Payments Elimination and Recovery Act of 2010 (IPERA), in fiscal year 2011, DOE directed its programs to develop risk assessments using eight qualitative risk factors, such as recent major changes in program funding, and report quantitative information on improper payments. GAO found that 26 of 55 programs did not prepare risk assessments in 2011 and that the quantitative information reported, including the estimated amount of improper payments, was not reliable because, for example, it did not include information for all programs. In reviewing DOE's 2011 risk assessments, GAO also found the following:
DOE did not always include a clear basis for risk determinations. At least 6 of the 29 programs that prepared risk assessments did not take into account the eight qualitative risk factors, making the basis of their risk determinations unclear. At most, the assessments for 23 programs took into account the risk factors. However, support for their determinations varied widely, and some did not contain enough information to identify how the program arrived at its risk determination, which is inconsistent with federal standards for internal control. DOE's guidance directs personnel to prepare a risk assessment that considers these eight factors but does not provide further direction on what to include. Absent such direction, DOE personnel may not have a consistent understanding of how to complete their risk assessments.
DOE did not fully evaluate other relevant risk factors. DOE's risk assessments did not fully evaluate other relevant risk factors, such as weaknesses in key controls for preventing and detecting improper payments—including inadequate subcontractor oversight. GAO found that some risk assessments included information from internal control evaluations, but many did not. DOE guidance does not instruct personnel to consider weaknesses in key controls for preventing and detecting improper payments. Without providing specific examples of other relevant risk factors in guidance and directing personnel to consider them when performing risk assessments, DOE will not have reasonable assurance that each of its programs fully evaluates risks.
Based on its 2011 assessments, DOE was not required under IPERA to prepare risk assessments or report on the amount of improper payments in 2012 and 2013. However, not fully considering program risks in its 2011 assessments and including unreliable data raises questions about whether the 2011 assessments were reliable. GAO recommends that DOE take steps to improve its risk assessments, including revising guidance on how programs are to address risk factors, providing examples of other risk factors likely to contribute to improper payments, and directing programs to consider those factors. DOE concurred with GAO's recommendations.
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard’s responsibilities fall into two general categories—those related to homeland security missions, such as port security and vessel escorts, and those related to non-homeland security missions, such as search and rescue and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels and aircraft and, through its Deepwater Program, is currently modernizing or replacing those assets. At the start of Deepwater in the late 1990s, the Coast Guard chose to use a system of systems acquisition strategy that was intended to replace the assets with a single, integrated package of aircraft, vessels, and communications systems. As the systems integrator, ICGS was responsible for designing, constructing, deploying, supporting, and integrating the assets. The decision to use a systems integrator for the Deepwater Program was driven in part because of the Coast Guard’s lack of expertise in managing and executing an acquisition of this magnitude. Under this approach, the Coast Guard provided the contractor with broad, overall performance specifications—such as the ability to interdict illegal immigrants—and ICGS determined the specifications for the Deepwater assets. According to Coast Guard officials, the ICGS proposal was submitted and priced as a package; that is, the Coast Guard bought the entire solution and could not reject any individual component. Deepwater assets are in various stages of the acquisition process. Some, such as the NSC and Maritime Patrol Aircraft, are in production. Others, such as the Fast Response Cutter, are in design, and still others, such as the Offshore Patrol Cutter, are in the early stages of requirements definition. Since the Commandant’s April 2007 announcement that the Coast Guard was taking over the lead role in systems integration from ICGS, the Coast Guard has undertaken several initiatives that have increased accountability for Deepwater outcomes within the Coast Guard and to DHS. The Coast Guard’s Blueprint for Acquisition Reform sets forth a number of objectives and specific tasks with the intent of improving acquisition processes and results. Its overarching goal is to enhance the Coast Guard’s mission execution through improved contracting and acquisition approaches. One key effort in this regard was the July 2007 consolidation of the Coast Guard’s acquisition responsibilities—including the Deepwater Program—into a single acquisition directorate. Previously, Deepwater assets were managed independently of other Coast Guard acquisitions within an insulated structure. The Coast Guard has also vested its government project managers with management and oversight responsibilities formerly held by ICGS. The Coast Guard is also now managing Deepwater under an asset-based approach, rather than as an overall system-of-systems as initially envisioned. This approach has resulted in increased government control and visibility. For example, cost and schedule information is now captured at the individual asset level, resulting in the ability to track and report cost breaches for assets. Under the prior structure, a cost breach was to be tracked at the overall Deepwater Program level, and the threshold was so high that a breach would have been triggered only by a catastrophic event. 
To manage Deepwater acquisitions at the asset level, the Coast Guard has begun to follow a disciplined project management process using the framework set forth in its Major Systems Acquisition Manual. This process requires documentation and approval of program activities at key points in a program’s life cycle. The process begins with identification of deficiencies in Coast Guard capabilities and then proceeds through a series of structured phases and decision points to identify requirements for performance, develop and select candidate systems that meet those requirements, demonstrate the feasibility of selected systems, and produce a functional capability. Previously, the Coast Guard authorized the Deepwater Program to deviate from the structured acquisition process, stating that the requirements of the process were not appropriate for the Deepwater system-of-systems approach. Instead, Deepwater Program reviews were required on a schedule-driven—as opposed to the current event-driven—basis. Further, leadership at DHS is now formally involved in reviewing and approving key acquisition decisions for Deepwater assets. We reported in June 2008 that DHS approval of Deepwater acquisition decisions as part of its investment review process was not required, as the department had deferred decisions on specific assets to the Coast Guard in 2003. We recommended that the Secretary of DHS direct the Under Secretary for Management to rescind the delegation of Deepwater acquisition decision authority. In September 2008, the Under Secretary took this step, so that Deepwater acquisitions are now subject to the department’s investment review process, which calls for executive decision making at key points in an investment’s life cycle. We also reported this past fall, however, that DHS had not effectively implemented or adhered to this investment review process; consequently, the department had not provided the oversight needed to identify and address cost, schedule, and performance problems in its major investments. Without the appropriate reviews, DHS loses the opportunity to identify and address cost, schedule, and performance problems and, thereby, minimize program risk. We reported that 14 of the department’s investments that lacked appropriate review experienced cost growth, schedule delays, and underperformance—some of which were substantial. Other programs within DHS have also experienced cost growth and schedule delays. For example, we reported in July 2008 that the Coast Guard’s Rescue 21 system was projected to experience cost increases of 184 percent and schedule delays of 5 years after rebaselining. DHS issued a new interim management directive on November 7, 2008, that addresses many of our findings and recommendations on the department’s major investments. If implemented as intended, the more disciplined acquisition and investment review process outlined in the directive will help ensure that the department’s largest acquisitions, including Deepwater, are effectively overseen and managed. While the decision to follow the Major Systems Acquisition Manual process for Deepwater assets is promising, the consequences of not following this acquisition approach in the past—when the contractor managed the overall acquisition—are now apparent for assets already in production, such as the NSC, and are likely to pose continued problems, such as increased costs. 
Because ICGS had determined the overall Deepwater solution, the Coast Guard had not ensured traceability from identification of mission needs to performance specifications for the Deepwater assets. In some cases it is already known that the ICGS solution does not meet Coast Guard needs, for example: The Coast Guard accepted the ICGS-proposed performance specifications for the long-range interceptor, a small boat intended to be launched from larger cutters such as the NSC, with no assurance that the boat it was buying was what was needed to accomplish its missions. Ultimately, after a number of design changes and a cost increase from $744,621 to almost $3 million, the Coast Guard began to define for itself the capabilities it needed and has decided not to buy any more of the ICGS boats. ICGS had initially proposed a fleet of 58 fast response cutters, subsequently termed the Fast Response Cutter-A (FRC-A), which were to be constructed of composite materials (as opposed to steel, for example). However, the Coast Guard suspended design work on the FRC-A in February 2006 to assess and mitigate technical risks. Ultimately, because of high risk and uncertain cost savings, the Coast Guard decided not to pursue the acquisition, a decision based largely on a third-party analysis that found the composite technology was unlikely to meet the Coast Guard’s desired 35-year service life. After obligating $35 million to ICGS for the FRC-A, the Coast Guard pursued a competitively awarded fast response cutter based on a modified commercially available patrol boat. That contract was awarded in September 2008. Although the shift to individual acquisitions is intended to provide the Coast Guard with more visibility and control, key aspects still require a system-level approach. These aspects include an integrated C4ISR system—needed to provide critical information to field commanders and facilitate interoperability with the Department of Defense and DHS—and decisions on production quantities of each Deepwater asset the Coast Guard requires to achieve its missions. The Coast Guard is not fully positioned to manage these aspects under its new acquisition approach but is engaged in efforts to do so. C4ISR is a key aspect of the Coast Guard’s ability to meet its missions. How the Coast Guard structures C4ISR is fundamental to the success of the Deepwater Program because C4ISR encompasses the connections among surface, aircraft, and shore-based assets and the means by which information is communicated through them. C4ISR is intended to provide operationally relevant information to Coast Guard field commanders to allow the efficient and effective execution of their missions. However, an acquisition strategy for C4ISR is still in development. Officials stated that the Coast Guard is revisiting the C4ISR incremental acquisition approach proposed by ICGS and analyzing that approach’s requirements and architecture. In the meantime, the Coast Guard is continuing to acquire C4ISR through ICGS. As the Coast Guard transitions from the ICGS-based system-of-systems acquisition strategy to an asset-based approach, it will need to maintain a strategic outlook to determine how many of the various Deepwater assets to procure to meet Coast Guard needs. When deciding how many of a specific vessel or aircraft to procure, it is important to consider not only the capabilities of that asset, but how it can complement or duplicate the capabilities of the other assets with which it is intended to operate. 
To that end, the Coast Guard is modeling the planned capabilities of Deepwater assets, as well as the capabilities and operations of existing assets, against the requirements for Coast Guard missions. The intent of this modeling is to test each planned asset to ensure that its capabilities fill stated deficiencies in the Coast Guard’s force structure and to inform how many of a particular asset are needed. However, the analysis based on the modeling is not expected to be completed until the summer of 2009. In the meantime, the Coast Guard continues to plan for asset acquisitions in numbers very similar to those determined by ICGS, such as 8 NSCs. Like many federal agencies that acquire major systems, the Coast Guard faces challenges in recruiting and retaining a sufficient government acquisition workforce. In fact, one of the reasons the Coast Guard originally contracted with ICGS as a systems integrator was the recognition that the Coast Guard lacked the experience and depth in its workforce to manage the acquisition itself. The Coast Guard’s 2008 acquisition human capital strategic plan sets forth a number of workforce challenges that pose the greatest threats to acquisition success, including a shortage of civilian acquisition staff, its military personnel rotation policy, and the lack of an acquisition career path for its military personnel. The Coast Guard has taken a number of steps to hire more acquisition professionals, including the increased use of recruitment incentives and relocation bonuses, utilizing direct hire authority, and rehiring government annuitants. The Coast Guard also recognizes the impact of military personnel rotation on its ability to retain people in key positions. Its policy of 3-year rotations of military personnel among units, including to and from the acquisition directorate, limits continuity in key project roles and can have a serious impact on acquisition expertise. While the Coast Guard concedes that it does not have the personnel required to form a dedicated acquisition career field for military personnel, such as that found in the Navy, it is seeking to improve the base of acquisition knowledge throughout the Coast Guard by exposing more officers to acquisition as they follow their regular rotations. In the meantime, the lack of a sufficient government acquisition workforce means that the Coast Guard is relying on contractors to supplement government staff, often in key positions such as cost estimators, contract specialists, and program management support. While support contractors can provide a variety of essential services, when they are performing certain activities that closely support inherently governmental functions, their use must be carefully overseen to ensure that they do not perform inherently governmental roles. Conflicts of interest, improper use of personal services contracts, and increased costs are also potential concerns of reliance on contractors. In response to significant problems in achieving its intended outcomes under the Deepwater Program, the Coast Guard leadership has made a major change in course in its management and oversight by re-organizing its acquisition directorate, moving away from the use of a contractor as the systems integrator, and putting in place a structured, more disciplined acquisition approach for Deepwater assets.
While the initiatives the Coast Guard has underway have begun to have a positive impact, the extent and duration of this impact depend on positive decisions that continue to increase and improve government management and oversight. Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions you or members of the subcommittee may have at this time. For further information about this testimony, please contact John P. Hutton, Director, at 202-512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has a large body of work examining government agencies' approaches to managing their large acquisition projects. GAO has noted that without sufficient knowledge about system requirements, technology, and design maturity, programs are subject to cost overruns, schedule delays, and performance that does not meet expectations. The Deepwater Program, intended to replace or modernize 15 major classes of Coast Guard assets, accounts for almost 60 percent of the Coast Guard's fiscal year 2009 appropriation for acquisition, construction and improvements. GAO has reported over the years on this program, which has experienced serious performance and management problems such as cost breaches, schedule slips, and assets designed and delivered with significant defects. To carry out the Deepwater acquisition, the Coast Guard contracted with Integrated Coast Guard Systems (ICGS) as a systems integrator. In April 2007, the Commandant acknowledged that the Coast Guard had relied too heavily on contractors to do the work of government and announced that the Coast Guard was taking over the lead role in systems integration from ICGS. This testimony reflects our most recently issued work on Deepwater, specifically our June 2008 report, Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain, GAO-08-745. Over the past two years, the Coast Guard has reoriented its acquisition function to position itself to execute systems integration and program management responsibilities formerly carried out by ICGS. The acquisition directorate has been consolidated to oversee all Coast Guard acquisitions, including the Deepwater Program, and Coast Guard project managers have been vested with management and oversight responsibilities formerly held by ICGS. Another key change has been to manage the procurement of Deepwater assets on a more disciplined, asset-by-asset approach rather than as an overall system of systems, where visibility into requirements and capabilities was limited. For example, cost and schedule information is now captured at the individual asset level, resulting in the ability to track and report breaches for assets. Further, to manage Deepwater acquisitions at the asset level, the Coast Guard has begun to follow a disciplined project management process that requires documentation and approval of program activities at key points in a program's life cycle. These process changes, coupled with strong leadership to help ensure the processes are followed in practice, have helped to improve Deepwater management and oversight. However, the Coast Guard still faces many hurdles going forward and the acquisition outcome remains uncertain. The consequences of not following a disciplined acquisition approach for Deepwater acquisitions and of relying on the contractor to define Coast Guard requirements are clear now that assets, such as the National Security Cutter, have been paid for and delivered without the Coast Guard's having determined whether the assets' planned capabilities would meet mission needs. While the asset-based approach is beneficial, certain cross-cutting aspects of Deepwater--such as command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) and the overall numbers of each asset needed to meet requirements--still require a system-level approach. The Coast Guard is not fully positioned to manage these aspects.
One of the reasons the Coast Guard originally contracted with ICGS as the systems integrator was the recognition that the Coast Guard lacked the experience and depth in its workforce to manage the acquisition itself. The Coast Guard has faced challenges in building an adequate government acquisition workforce and, like many other federal agencies, is relying on support contractors--some in key positions such as cost estimating and contract support. GAO has pointed out the potential concerns of reliance on contractors who closely support inherently governmental functions.
In 1989, the Pacific Area Office, then called the Western Regional Office, identified several deficiencies in the 935 ZIP Code area and proposed relocating the distribution operations for five post offices in the area into a new facility. The key deficiencies identified by postal officials included the following: space deficiencies for mail processing operations in the Mojave MPO, which is responsible for mail processing operations for all of the post offices in the Antelope Valley; space deficiencies in carrier delivery operations in four of the five post offices affected by the proposed project; and space deficiencies in the Lancaster MPO that limited its ability to meet demand for post office boxes and to provide parking for customers, employees, and postal vehicles. Figure 1 shows the locations of the five affected post offices in the cities of Lancaster, Mojave, Palmdale, Tehachapi, and Ridgecrest located in the southern portion of the Antelope Valley. Since the 1980 census, the Antelope Valley area, also known as the 935 ZIP Code area, has more than doubled its population. The growth in mail volume has paralleled the population growth. As shown in table 1, growth in this area was somewhat slower in the 1990s than in the 1980s. However, current projections indicate that population and mail growth will accelerate again over the next decade. Over half of the population growth in the 935 ZIP Code area occurred in two cities, Lancaster and Palmdale. From 1980 to 1990, Lancaster’s population grew from about 48,000 to 97,300, and Palmdale’s population grew from about 12,300 to 68,900. During this same period, Mojave’s population grew from about 2,900 to 3,800. The Southern California Association of Governments has projected that the Lancaster-Palmdale population would again increase by over 200 percent by 2010. Mail scheduled for final delivery in the Antelope Valley originates from all over the United States and the rest of the world and is transported to the Los Angeles Processing and Distribution Center located near Los Angeles International Airport. There, the mail undergoes a first-level sort by the first three digits of the ZIP Code. The mail is then transported to smaller mail processing facilities, such as the Mojave MPO, where secondary operations are performed on automated equipment to sort the mail to the five-digit ZIP Code level. Generally, at this stage, some of the mail would also be automatically sorted to the carrier-route level and sequenced in the order that carriers deliver it. However, in Mojave, the necessary automated equipment is not available for sorting mail down to the carriers’ delivery sequence order. Thus, the mail is transported to the postal facilities responsible for mail delivery, such as Lancaster, where the mail carriers manually sort the mail into delivery sequence order. Administrative support and mail processing functions for mail to be delivered in the 935 ZIP Code area, as well as local retail and delivery functions, are housed at the MPO in Mojave. According to available postal documents, the Mojave MPO was functioning at its maximum capacity in 1990. Mail processing and customer service operations competed for space in the crowded facility. Operational efficiency was beginning to suffer due to the continual shifting of equipment to allow adequate space for processing operations. More recently, postal documents noted that some automated sorting equipment intended for Mojave processing operations was being stored in warehouses due to insufficient space.
Postal documents from 1990 also reported that the Lancaster MPO had reached its maximum capacity and could not accommodate the future growth anticipated in Lancaster. Carrier operations had spread onto the loading platform, where mail was being placed to await distribution, and both employees and mail were exposed to weather conditions. There was demand for additional post office boxes at the MPO, but there was no room to expand the box section. According to the Service, employee support facilities were inadequate, and parking facilities for customer, employee, and postal vehicles were also inadequate. Similar conditions reportedly existed in the Palmdale MPO, and a facility replacement was included in the Western Region's Five-Year Facility plan. The MPOs in Ridgecrest and Tehachapi were also reported to be experiencing space deficiencies, but not to the extent of the problems in Lancaster, Mojave, and Palmdale.

The proposed new Antelope Valley facility would include mail-processing operations and support functions that are currently located at the Mojave MPO, and the secondary mail-processing operations would be relocated from the Palmdale, Ridgecrest, Mojave, and Tehachapi MPOs to the new facility. The Mojave MPO would be retained and would continue to provide retail and delivery services for the area and serve as a transfer point for those areas north and west of Mojave. The existing Lancaster MPO would be retained to serve as a carrier annex for carrier delivery operations. The Palmdale, Tehachapi, and Ridgecrest MPOs would be retained to provide full retail and delivery services for their areas.

To evaluate the Service's approval process for this project, we performed the following: obtained and reviewed Service policies and guidance in effect when the project began and the policies and guidance currently in effect for facility planning, site acquisition, and project approval; obtained and analyzed Service documents related to the proposed Antelope Valley project and project approval process; discussed the proposed project and the review process with Service officials in Headquarters, the Pacific Area Office, the Van Nuys District, and the Lancaster and Mojave MPOs; observed operating conditions at the existing Lancaster and Mojave postal facilities and visited the postal-owned site in Lancaster that was purchased in 1991; reviewed cost estimates for the two alternatives under consideration prior to the project being placed on hold in March 1999 (these cost estimates were included in draft project approval documents that were submitted for headquarters review in February 1999); and discussed the impact of the proposed project with community officials in Mojave, Kern County, and Lancaster, CA.

We did not evaluate whether this project should be approved or funded. The Service has a process and criteria for assessing and ranking capital facility projects for funding; however, we reviewed only this particular project and, therefore, did not have a basis for comparing its merits with those of other capital projects competing for approval and funding. We also did not independently verify the accuracy of the financial data included in the Postal Service's analyses of the cost of various alternatives under consideration. Postal officials acknowledged that these preliminary cost estimates might need corrections and revisions because they had not completed their review of the project approval documents.
Due to the incomplete status of this project, our assessment generally covered the requirements followed and actions taken by the Service during the period (1) from project initiation in 1989 until the first suspension in 1992 and (2) from its reinstatement in 1995 to August 1999. We conducted our review between December 1998 and August 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postmaster General; his written comments are included in appendix I and are discussed near the end of this report.

The Service followed most of its key requirements for acquiring a site in Lancaster prior to obtaining approval for the proposed Antelope Valley project, although some requirements were vague. One major exception was that the Headquarters CIC did not review and approve the proposed project justification and alternatives under consideration prior to advance site acquisition, as required by Service policies. The Service's guidance allowed advance site acquisition before all analyses that were required for final project approval were completed if, among other requirements, the Service believed that the preferred site would not be available when project approval was anticipated. Table 2 presents the key requirements in the Service's major facility project approval process and the actions taken by the Service to meet those requirements prior to project suspension in 1992. The key requirements of this project approval process include formal documentation, and the dates provided are based on available documentation.

The Postal Service's guidance detailing its investment policies and procedures for major facilities explains that its purpose is to ensure that major facility investments support the strategic objectives of the Postal Service, make the best use of available resources, and establish management accountability for investment decisions. Postal Service policies also specify the delegation of authority for approving capital facility projects based on total project costs. All capital projects exceeding $10 million in total project costs are considered major facility projects and are required to obtain final approval from the Postal Service's Board of Governors after being approved through appropriate area and headquarters officials, including the Headquarters CIC. Some facility projects may be funded from the area's budget. To obtain funding from headquarters capital investment funds, these proposed major capital facility projects must be prioritized along with proposed projects from all other regions/areas and included by headquarters officials in the Postal Service's Five-Year Major Facilities Priority List. This list is to be updated annually and included as part of the Service's annual budget, which is then reviewed and approved by postal management and the Board of Governors.

As shown in table 2, the Service generally followed its approval process for advance site acquisition. However, one major requirement that was not completed before the advance site acquisition was the Advance Project Review, which involves the review and approval of the project justification and alternatives by the Headquarters CIC. Postal officials told us that the project had met all of the Service's requirements prior to approval for advance site acquisition.
However, the Service could not provide a date for when the Headquarters CIC meeting occurred or any documentation of the completion of the Advance Project Review stage. The purpose of the Advance Project Review by the Headquarters CIC, according to postal guidance, is "to be sure that the Headquarters CIC concurs with the scope (especially the justification, alternatives, and strategic compatibility) before the expenditure of substantial planning resources."

According to the Service's requirements that were in effect in 1991, advance site acquisition was permitted prior to completion of the project approval process with the approval of the headquarters senior official responsible for facilities. The regional postmaster general requested site acquisition in advance of project approval for the site in Lancaster on June 25, 1991. The request noted that Western Region officials had approved funding from the region's budget for site acquisition in fiscal year 1991. In addition, the request noted that the project was a headquarters-funded project scheduled to be presented to the Headquarters CIC for review in mid-1992, go to the Board of Governors for review and approval in August 1992, and begin construction in fiscal year 1992. The request also noted that control of the site expired on June 30, 1991, and that failure to acquire the site as an advance site acquisition might result in its loss.

The total project cost was estimated at just over $31 million, with a site purchase price of $6,534,000 and site support costs of $100,000, for a total advance site acquisition funding request of $6,634,000. The request also noted that the property owner had offered the Postal Service an additional savings of $250,000, which would reduce the sales price to $6,284,000, if the site acquisition were approved and closing occurred prior to August 1, 1991. The funding request was approved by the appropriate headquarters official, and the site was purchased for $6,534,000 on October 25, 1991.

Service guidance required that alternatives be identified and analyzed before a project could qualify for advance site acquisition but did not clearly state the type or depth of analyses required. At the time of the Lancaster site acquisition, some analyses, such as the space requirements (which determine sizes of buildings and site requirements for operational needs) and the cost estimates of project alternatives (which provide information on projected cash flows and return on investment), were still under development. Only the estimated project costs associated with the preferred alternative—construction of a new processing facility in Lancaster—were available prior to site acquisition. Moreover, the available documentation did not explain why this alternative was preferred over the other alternatives considered. According to documentation provided to us, four alternatives were presented at the project planning meeting held in June 1990.
The four alternatives, with the key differences underscored, were as follows:

(A) a new area mail processing center in Lancaster for relocated mail processing operations, distribution operations, and delivery services for the 93535 ZIP Code area; the existing Lancaster MPO would retain its retail and delivery services;

(B) a new general mail facility in Lancaster for relocated mail processing operations, distribution operations, and delivery services for the 93535 ZIP Code area; the existing Lancaster MPO would retain its delivery services, and retail services would be relocated in the area;

(C) a new area mail processing center in the vicinity of Mojave and Lancaster for relocated mail processing operations and distribution operations; the existing Mojave and Lancaster MPOs would retain retail and delivery services for their respective communities, and a new facility would be constructed in Lancaster for delivery services; and

(D) the lease and modification of an existing building for use as a Mail Handling Annex for relocated mail processing operations and distribution operations; the existing Mojave MPO would retain its retail and delivery services.

According to available project documentation: "The alternatives were discussed at length. Alternative A, B, and C were discussed. It was agreed upon that these alternatives will solve the major operating needs of the Antelope Valley, but will not address all of our needs for delivery and retail facilities. A reassessment of the proposed concept and the requirements for Lancaster and Palmdale Main Post Offices will be conducted following site selection to ascertain whether the specific site is conducive to delivery or retail activities as a result of its location."

Available documentation also stated: "The existing facilities in Lancaster, Palmdale, and Mojave could not be expanded to provide sufficient space to accommodate the current and projected growth in the Antelope Valley. Continuation of mail processing operations at the Mojave MPO will not meet corporate goals for improved delivery times and efficiencies." However, when the proposed project was revised in 1998, expansion of the existing Mojave facility was one of two alternatives under consideration, along with the preferred alternative to construct a new facility on the Service-owned site in Lancaster. Available documentation did not explain why expansion of the existing Mojave facility was not considered viable in 1990 but was considered a viable alternative in 1998.

The problem of inadequate documentation of the Service's real estate acquisition decisions is not a new issue. In 1989, we reviewed the Service's real estate acquisition process. At that time, we reviewed a sample of 246 sites purchased during fiscal year 1987 and made recommendations to improve the Service's real estate acquisition program. Our 1989 report found that the Service usually purchased sites that exceeded both its operational needs and advertised size requirements. When alternative sites were available for purchase, the Service generally selected the larger, more costly sites without requiring site selection committees to document why less expensive alternative sites were less desirable. The report raised concerns, based on the Service's requirements for advertising and purchasing practices, that the Service might be spending more than was necessary for land and accumulating an unnecessarily large real estate inventory.
The report also recognized that sometimes larger, more costly sites may best meet the Service's operational requirements but that justification for such selections should be required when smaller, less costly contending sites were available. In the Service's letter dated August 25, 1989, responding to a draft of that report, the Postmaster General agreed with our recommendation relating to more complete documentation of the selection process. He stated, "The Postal Service is concerned only with the best value and will make sure that the reasoning behind the determination of best value is more carefully documented in the future." However, such improvement was not evident in the documentation related to the proposed Antelope Valley project, which was prepared soon after our report was issued.

We identified inconsistencies in internal postal memorandums related to the required site size and disposition of any excess land. The region's June 25, 1991, memorandum requesting approval for advance site acquisition in Lancaster stated, "No excess land is expected to remain." Another internal memorandum dated October 25, 1991—the date of final settlement for the purchase of the Lancaster site—discussed preparation of the final cost estimates for the proposed Antelope Valley Area project and stated, "Please note that the required site is considerably less than the selected site." Further, a February 1992 internal memorandum noted that the Lancaster site was purchased in late 1991 and that the site area exceeded Service requirements by 296,000 square feet (about 6.8 acres). The reason for the purchase of a site that was larger than needed was not explained in any available documents. More recent documents related to the proposed project alternatives also noted that the Service-owned site in Lancaster exceeds project requirements, but the alternatives do not discuss how the excess property would be disposed of.

As of the beginning of July 1999, the Service's consideration of the proposed Antelope Valley project had been put on hold, and a decision may not be made for some time. Consequently, the status and funding of the proposed project remain uncertain almost 10 years after it was initiated. Consideration of the project has been delayed due to two suspensions, reductions in capital investment spending, and a recent reclassification of the proposed facility. As a result, processing and delivery deficiencies that were identified as critical for this area in 1989 continue to exist, and the Service has not determined how it plans to address these operational deficiencies. In addition, the Service has incurred additional costs that have resulted from the need to repeat analyses and update documents required for final project approval. With the project currently on hold, further costs may be incurred to again update required analyses. Finally, the delays have prolonged the uncertainty related to business development opportunities for the affected communities of Mojave and Lancaster.

Initiated in 1989, with an expectation that the project would be funded in fiscal year 1992, the proposed Antelope Valley project was suspended in 1992, while the Service was undergoing a reorganization and had reduced its funding for capital facility projects. Table 3 shows that between 1991 and 1995, the Service committed $999 million less to its facilities improvement program than it had originally authorized in its 1991 to 1995 Capital Improvement Plan.
Postal Service officials could not explain why the classification of this project, as a processing facility or other type of capital facility, has been changed several times and why it has not yet been submitted for consideration in the headquarters capital facility projects prioritization and funding process. All major mail processing facilities must be funded from the headquarters capital facility budget, while other types of processing and delivery facilities may be funded from regional/area budgets. At the time that the proposed project was suspended in 1992, it was classified as a mail processing facility in the Western Region/Pacific Area Major Facility Priority List. It had also been submitted for headquarters funding consideration in the Five-Year Major Facilities Priority List for fiscal years 1991 to 1995. The project was reinstated and reclassified in 1995 as a Delivery and Distribution Center (DDC), with the expectation that it would be funded out of area funds in fiscal year 1998. The Service suspended the project a second time in March 1999, while it was undergoing review by headquarters officials. Based upon the headquarters review, the project was again reclassified, from a DDC to a Processing and Distribution Center. The latest reclassification meant that the project would have to be funded by headquarters rather than the Pacific Area Office, and it would have to compete nationally for funding. This means that the project will have to await placement on the next headquarters Five-Year Major Facilities Priority List, which is scheduled to be completed by August 2000.

It is also not clear why the proposed project was reinstated and reclassified in 1995 as a DDC when the major purpose and design of this project had not fundamentally changed. Postal officials in the Pacific Area Office and Van Nuys District said that the recently proposed Antelope Valley project is essentially the same as the project that was being planned when the Service acquired the 25-acre Lancaster site in 1991. The major differences in the two projects are in nonmail processing areas. As previously mentioned, the proposed project had not had an Advance Project Review by the Headquarters CIC prior to the suspension in 1992. Such a review might have prevented the unexplained reclassifications of this project that have contributed to delays in its funding.

Ten years after this project began, the operational processing and delivery deficiencies that were identified as critical for this area in 1989 still remain. Because of continued space deficiencies, automated equipment has not been deployed as scheduled, and the projected operating efficiencies and savings have not been realized. The District projected that one benefit of automatically sorting mail into carrier delivery walk sequence would be an improvement in delivery performance of 4.25 percent annually. This additional sorting would decrease the time that the carriers spend in the delivery units preparing the mail for delivery and increase the amount of time the carriers would have to deliver the mail. Another negative effect of the space deficiencies in Mojave was that some of the mail originating in the 935 ZIP Code area (approximately 130,000 pieces per day) was diverted from processing in Mojave to the processing facility in Santa Clarita. According to local postal officials, the effect of this diversion was to delay by 1 day the delivery of some mail that was to be delivered in the 935 ZIP Code area.
Local area First-Class mail was supposed to be delivered within 1 day to meet overnight delivery standards.

Since this project was initiated in 1989, the Service has taken several actions to address mail processing and delivery deficiencies in the Antelope Valley. The Service added 2,417 square feet of interior space to the Palmdale MPO by relocating the post office into a larger leased facility. Some relief was provided to the cramped carrier operations at the Lancaster MPO by relocating 15 of the 89 carrier routes serving Lancaster to the Lancaster Cedar Station. However, as we observed on our visit to the Lancaster facilities, conditions in Lancaster were still very congested. Mail that was waiting to be processed and workroom operations spilled out of the building onto the platform, exposing both employees and the mail to weather conditions.

In an effort to provide the Mojave MPO with more mail-processing space, a 2,400-square-foot tent was installed in 1998, at a cost of $30,000, next to the loading platform. The tent provided additional space for processing operations and for holding mail that was waiting to be processed, but it did not allow for deployment of any automated equipment scheduled for use in the 935 mail-processing functions. Also, we observed that the tent would not provide adequate shelter from high winds or other weather-related conditions. Some of the equipment was stored at district warehouses. Although these efforts have allowed the district to continue to provide processing and delivery service, it is not clear how the Service intends to address the operational processing and delivery deficiencies while decisions related to the proposed facility are pending.

Project delays have also contributed to higher costs, incurred to repeat and update some of the analyses and cost data needed for final project approval. Given that the process is not completed, additional costs may be incurred to further update required analyses. The Service has incurred additional costs related to developing a second set of documents required for project approval, including Facility Planning Concept documents, appraisals, space requirements, environmental assessments, and Decision Analysis Reports (DARs). Generally, the Service uses contractors to develop the environmental and engineering studies. Although the total cost of document preparation has not been quantified, available documentation indicates that the Service has incurred about $254,000 for costs related to previous design efforts for this project. In addition, costs that have not been quantified include staff time and travel costs associated with this project. The Area Office Operations Analyst who was responsible for preparing the DAR told us that it took him approximately a year to develop a DAR and the supporting documents and analysis. This did not include the time of the other individuals who provided him with various information needed to complete the analyses or the time of officials responsible for reviewing and approving the project. The Service has also incurred additional costs for travel associated with project reviews, such as the Planning Parameters Meeting, which involved the travel of at least three headquarters officials.

It is difficult at this stage to determine what additional analyses may be needed because the Antelope Valley project has been suspended and, according to Service officials, no further action is being taken on reviewing the project until it is submitted by Pacific area officials for prioritization.
We reviewed the cost estimates for the two alternatives that were included in the draft DAR that had been submitted to headquarters for review in February 1999 and found some deficiencies in the information presented. Postal officials stated that these types of deficiencies would be identified during their review process, which includes reviews by officials in three separate headquarters departments—Facilities, Operations, and Finance. They also said that the cost estimates in the DAR were too preliminary to use as a basis for assessing which of the two alternatives under consideration was more cost-effective. The officials noted that significant changes could be made to the cost estimates as the project documentation completes the review process.

In addition, the Service has not realized any return on its investment in the site in Lancaster, which has remained unused since 1991. This unrealized investment has an interest cost associated with the Service's use of funds to purchase the Lancaster site in October 1991. We estimated that the interest cost associated with the Service's $6.5 million investment totaled about $2.9 million from the time that the site was purchased in October 1991 through June 1999 and that it would likely increase by over $300,000 each year.
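The magnitude of this interest cost can be illustrated with a simple calculation. The sketch below is illustrative only: the interest rate used for the estimate above is not stated in this report, so the rate shown is an assumption chosen merely to show how figures of roughly this size arise from a $6.5 million outlay held for about 7 to 8 years.

```python
# Illustrative only: the 6 percent simple annual rate is an assumption,
# not the rate actually used in the estimate above.

investment = 6_534_000    # purchase price of the Lancaster site, October 1991
assumed_rate = 0.06       # hypothetical simple annual interest rate
years_held = 7.7          # October 1991 through June 1999, approximately

annual_interest = investment * assumed_rate          # about $392,000 per year
cumulative_interest = annual_interest * years_held   # about $3.0 million

print(f"Annual interest cost:     ${annual_interest:,.0f}")
print(f"Cumulative interest cost: ${cumulative_interest:,.0f}")
```

Under these assumptions, the annual carrying cost exceeds $300,000, and the cumulative cost through mid-1999 is on the order of the $2.9 million estimate noted above.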
The uncertainty of this project over such a long period has also created difficulties, particularly related to business development planning, for the affected Lancaster and Mojave communities. Mojave community officials have raised concerns about the effect that relocating the postal operations would have on their community. They expressed specific concerns about potential lost job opportunities for Mojave and nearby California City residents and about the impact that losing the postal processing operations would have on their efforts to attract new homes and retail services. Postal documents indicated that while none of the Mojave employees would lose their jobs, approximately 80 employees working the evening and night shifts would be relocated if distribution operations were moved to a new facility in Lancaster. The Service projects that the proposed expansion of the Mojave facility would create 10 additional jobs at the facility when it opens.

The project delay has also affected business development opportunities in Lancaster. After the Service selected the Lancaster site in 1991, the Mayor of Lancaster stated in a letter to the Postal Service that he welcomed the new facility and that it would anchor the new 160-acre Lancaster Business Park Project. Shortly after the Postal Service selected the 25-acre site, a major mailer, Deluxe Check Printing, acquired a 12-acre site adjacent to the postal property. Recently, the Lancaster City Manager noted that not having the Postal Service facility has made marketing the Business Park to potential developers very difficult. In addition, Lancaster officials stated that the city has spent over $20 million to provide improvements to the business park. These improvements were conditions of sale when the Postal Service acquired the site in 1991.

The Service followed most of its key requirements when it purchased a site in Lancaster in 1991 for the proposed Antelope Valley project before it had obtained overall project approval, although some requirements were vague. One major exception was that the Headquarters CIC did not review and approve the proposed project justification and alternatives under consideration prior to advance site acquisition, as required by Service guidance. The Service's requirements for advance site acquisition were unclear because they did not specify the types or depth of analyses required. The Service's analyses of alternatives were incomplete because estimated costs of the alternatives and space requirements were still under development. Also, it was not clear why an alternative that was recently under consideration, the expansion of the existing Mojave MPO, was not considered a viable alternative before the site in Lancaster was acquired.

We could not determine whether review and approval of the proposed project justification and alternatives by the Headquarters CIC would have resulted in changes in the proposed project justification and alternatives or in more in-depth analysis of the alternatives. Such a review might have prevented the unexplained inconsistencies in the classifications of this project that have contributed to delays in its funding. Likewise, it is not known whether the Committee's review would have suggested a course of action other than acquisition of the Lancaster site. Further, the more recent analysis of the alternative to expand the Mojave MPO is too preliminary to assess or draw any conclusions from because the headquarters review of the proposed project has been suspended.

However, what is known is that the Service spent about $6.5 million over 8 years ago to purchase a site that has remained unused. This site may or may not be used by the Service in the future, and the investment has a substantial annual interest cost associated with it. While this interest cost continues, the mail service deficiencies identified nearly 10 years ago remain unaddressed, and projected operating efficiencies and savings anticipated from new equipment are unrealized as the equipment remains in storage. Given this situation, it is not clear why the status of this project has been allowed to go unresolved for such a long time. It is also unclear at this time whether funding for this project will be approved and, if so, for what year of the next 5-year capital projects funding cycle. Thus, the Service's site investment in unused land and the existing operational deficiencies are likely to continue for some time, and the Service has not determined how it will address these issues if the project is not approved or funded for several years.

To address the long-standing uncertainties related to the proposed Antelope Valley project, we recommend that the Postmaster General take the following actions: resolve the internal inconsistencies in the classification of this project, determine whether the site in Lancaster should be retained, and ensure that the project is considered in the appropriate funding and approval process; and require the Pacific Area Office to determine whether immediate action is needed to address the operational deficiencies identified in the Antelope Valley area and report on planned actions and related time frames for implementation.

We received written comments from the Postmaster General on August 20, 1999. These comments are summarized below and included as appendix I. We also incorporated technical comments provided by Service officials into the report where appropriate.
The Postmaster General responded to our conclusion that the Service did not follow all of its procedures in effect at the time that approval was given to purchase a site for a proposed facility in advance of the proposed Antelope Valley project's review and approval. He stated that the Service has revised its procedures for advance site acquisition so that proposed sites are subjected to additional review and approval. As a result, he stated that the advance acquisition of a site for a project such as Antelope Valley now must receive approval from the Headquarters Capital Investment Committee and the Postmaster General.

The Postmaster General generally agreed with our recommendations to address the unresolved status of the Antelope Valley project and the operational deficiencies in the Antelope Valley area. In response to our first recommendation to resolve the inconsistent classification of the project, he stated that the Service has determined that the proposed Antelope Valley project is properly classified as a mail processing facility. He also stated that the proposed project would be considered for funding along with other such projects during the next round of project review and prioritization. While clarification of the project's classification is a good first step, until disposition of the entire project is completed, the status of the project, including the use of the Lancaster site, remains unresolved.

Regarding our second recommendation to address operational deficiencies in the Antelope Valley area, he stated that officials from the involved Pacific Area offices have met to discuss the most workable alternatives to sustain and improve mail service for Antelope Valley customers. However, due to the complexity of issues, including the possibility of relocating some operations into leased space on an interim basis, a fully developed distribution and delivery improvement plan may take some time to implement. He agreed to provide us with action plans and time frames as they are finalized. If actions are taken as described by the Postmaster General, we believe they would be responsive to our recommendations.

We are sending copies of this report to Representative Howard (Buck) McKeon; Representative John McHugh, Chairman, and Representative Chaka Fattah, Ranking Minority Member, Subcommittee on the Postal Service, House Committee on Government Reform; Mr. William J. Henderson, Postmaster General; and other interested parties. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix II. If you have any questions about this report, please call me at (202) 512-8387. Teresa Anderson, Melvin Horne, Hazel Bailey, Joshua Bartzen, and Jill Sayre made key contributions to this report.
Pursuant to a congressional request, GAO reviewed the project approval process the Postal Service used in proposing to relocate postal operations for the Antelope Valley, California, area from the Main Post Office in Mojave, California, to a new facility in Lancaster, California. GAO noted that: (1) the Service followed most of its key requirements for acquiring a site in Lancaster in 1991 prior to obtaining approval for the proposed Antelope Valley project, although some requirements were vague; (2) one major exception was that review and approval of the proposed project justification and alternatives by the Headquarters Capital Investment Committee did not take place prior to the advance site acquisition in Lancaster, as required by Service policies; (3) Service guidance was unclear because it required that alternatives be identified and analyzed before a project could qualify for advance site acquisition, but it did not clearly state the type or depth of analysis required; (4) at the time of the Lancaster site acquisition, the analysis to support the decision was incomplete; (5) more detailed analyses were still under development; (6) GAO could not determine from available documentation why the alternative to construct a new facility in Lancaster was preferred over other alternatives that had been proposed or why various alternatives were not considered viable; (7) the Lancaster site purchased for $6.5 million in 1991 has remained unused since that time due to the Service's failure to decide how and when it will resolve the long-standing problems that the proposed Antelope Valley project was to address; (8) continuing negative effects have resulted from the incomplete status of the project for almost 10 years; (9) project approval and funding of the project remain uncertain due to delays resulting from two suspensions, limits on capital spending, and changes in project classification; (10) it is unclear how the Service intends to address the space deficiencies that have contributed to operational processing and delivery deficiencies in the Antelope Valley area; (11) because of continued space deficiencies, automated equipment was sitting unused in warehouses, some mail delivery was being delayed, and the projected operating efficiencies and savings have not been realized; (12) the Service has invested $6.5 million in land that has been unused for nearly 8 years; such an investment has a substantial annual interest cost estimated at over $300,000; (13) it has also incurred additional costs to update documents required for project approval and may incur more costs if some of these documents again have to be updated when the project is reviewed for approval; and (14) the Lancaster and Mojave communities have faced uncertainty over business development opportunities as a result of the project delays.
The Department of State’s primary mission is to advise the President in the formulation and execution of foreign policy and to ensure the advancement and protection of U.S. interests abroad. The Department is also responsible for conducting consular operations, including visa services for foreign nationals; managing embassies and other real property—with a current estimated value of about $12 billion; and providing support services to at least 24 other federal agencies that have offices overseas. To meet these responsibilities, the Department must be able to (1) quickly and accurately analyze and interpret political, economic, and societal events taking place all over the world and (2) assess the potential effects of these events on the United States. These responsibilities are further complicated by the current operating environment of shrinking budgets and reduced staffing. In this context, effective IRM is key to successful accomplishment of State’s critical missions.

Twenty-one bureaus, as well as over 260 foreign posts and other offices, support State’s worldwide program and administrative responsibilities. By delegating responsibilities to the bureaus and offices, State has given each a significant amount of operational control for IRM. For example, many bureaus and offices have their own IRM staff, as well as budgetary authority, to independently undertake systems initiatives. Of the Department’s fiscal year 1994 total reported IRM expenditures—excluding salaries—58 percent, or approximately $149.1 million, was managed by State’s IRM office, while the remainder was allocated among the bureaus. The IRM office is responsible for guiding, coordinating, and providing technical support for the bureaus’ and offices’ IRM activities. The IRM office also is responsible for providing the infrastructure necessary for the bureaus and offices to achieve their individual IRM goals.

State relies on a variety of information resources to help it carry out its responsibilities and support its decentralized operations. For instance, State has numerous systems to help with its consular activities, which include managing immigrant and nonimmigrant visas and preventing their issuance to terrorists, drug traffickers, and others who are not entitled to them. State also accounts for and controls its annual appropriation of about $5 billion on a reported 33 domestic and overseas financial systems and subsystems. Further, State has a variety of systems to help it account for and manage both its overseas real properties and over 25,000 full-time employees, here and abroad. Several federal agencies, including the Department of Defense, the United States Information Agency, and the Agency for International Development, also depend on information from State’s automated systems. In fiscal year 1994, State reported spending about $372 million on its IRM activities.

State supports its systems on a variety of hardware platforms. Its corporate systems are operated on mainframe computers at data processing centers in the Washington, D.C. area and overseas. Domestic bureaus and overseas posts are also equipped to varying degrees with mini-computers and office automation equipment, which State purchased over a 15-year period almost exclusively from one vendor—Wang. Foreign Service Officers rely on this equipment for electronic mail, word processing, and other functions to develop reports and communicate information in support of State’s foreign policy objectives.
Even though State relies on information and technology to meet its mission and business needs, its management of these resources has historically been poor. GAO, the General Services Administration, the Office of Management and Budget, and State’s Office of Inspector General have all reported broad IRM problems at State related to planning, budgeting, organization, acquisition, and information security. The reports also discussed problems in State’s financial management, property, and consular systems. The reports stated that because of these problems, managers often did not have the accurate, timely, integrated information they needed to meet administrative and foreign policy objectives.

State, too, has recognized that it has many long-standing IRM problems. It reported a number of these material and high-risk weaknesses to the President and the Congress under provisions of the Federal Managers’ Financial Integrity Act (FMFIA) and its implementing guidance. These weaknesses and State’s efforts to address them include the following:

In 1993, State reported that the Department relied heavily upon proprietary computer systems and associated software for all of its major applications (that is, finance, consular, personnel, and other administrative systems). State also reported that this Wang equipment was technically obsolete and prone to failure. The Department’s modernization initiative is aimed at replacing the Wang systems, reducing maintenance costs, and improving system reliability.

Since 1987, State has reported that outdated technology and inadequate management controls and oversight of visa processing increased vulnerability to illegal immigration and diminished the integrity of the U.S. visa. State currently has an effort aimed at automating visa name-checking systems at all posts worldwide and eliminating outdated microfiche systems that are currently at 72 posts. This effort is intended to reduce the risk of issuing visas to terrorists, drug traffickers, and others.

Over the past decade, State has reported 42 material weaknesses and nonconformances in its core and subsidiary accounting systems. The Department manages six financial management systems worldwide. It has reported that its general ledger has never properly reflected the agency’s financial position. The Integrated Financial Management System initiative is intended to integrate State’s various financial management and related systems, providing managers with accurate and timely information to be used in making program decisions.

State has reported for the past decade that the absence of backup capabilities for mainframe systems jeopardized the Department’s domestic information infrastructure in the event of an emergency. State has an effort underway to acquire mainframe backup to provide for processing if the mainframes at State’s data processing centers fail.

Appendix II provides further details on these four initiatives and the problems they are intended to correct. To assess the adequacy of State’s current IRM program and improvement initiatives in meeting agency and business needs, we focused on a recent GAO report on 11 best IRM practices of leading public and private organizations. (See appendix III for a list of these best practices.) Using this report, as well as other federal IRM guidance, we identified management elements we believe to be critical and relevant to IRM success at State.
These elements include top-level management commitment to improving IRM; a strategic IRM planning process that is based on mission and business needs and that integrates the planning and budgeting functions; an acquisition process in accordance with legal requirements and applicable policy guidance; and an organizational framework that includes leadership and authority for IRM, an executive-level review process to prioritize and oversee investment projects, and an IRM organization that provides adequate guidance and support for agencywide customers.

To obtain information on State’s IRM program for evaluation against these management elements, we interviewed senior agency officials, IRM managers, technical personnel, and bureau representatives. We conducted our work between January 1994 and November 1994 in accordance with generally accepted government auditing standards. Appendix I provides further details on our scope and methodology.

While the State Department depends on information to conduct its various missions, its management of information technology over the years has been poor. Problems have gone unresolved, and managers have not had the information they need to meet mission-critical and business needs. Moreover, improvement efforts focused on addressing these problems have not been successful, have taken too long, or have had only minimal impact on operations. Many of these problems are similar to ones we have seen throughout the federal government.

We recently studied a number of leading private and public organizations to determine how they managed information resources to improve mission performance. We identified practices that, when used together, led to significant improvements in mission performance. These practices include top-level management recognizing the need to change and taking steps to ensure sustained commitment throughout the organization; establishing an outcome-oriented, integrated strategic information management process; and establishing organizationwide information management capabilities to ensure that information technology meets mission and business needs.

A basic step toward improving information management is top executives recognizing that business as usual will not suffice and that the need to change is both real and urgent. Senior executives should (1) recognize the value of improving IRM, (2) evaluate IRM practices against those of leading organizations, and (3) dedicate themselves, and the organization, to improvement. Initiating and maintaining activities focused on rapid improvement requires investing in, identifying, and adopting new techniques, new processes, and new ways of doing business.

The lack of top-level management commitment to improving IRM has long been a problem at State, as evidenced by the Department’s failure to resolve material, high-risk, and other IRM weaknesses. Despite repeated criticisms from oversight agencies over the past decade, State has not had a sustained effort to improve IRM departmentwide. For example, the Department identified serious weaknesses in its financial and accounting systems over a decade ago that have not yet been corrected. These weaknesses include the general ledger not properly reflecting the agency’s financial position, deficiencies in data quality, and inadequate support of mission performance.
Our recent report on the Integrated Financial Management System project, which is intended to correct these weaknesses, concluded that the project held a high risk of failure because of a lack of departmentwide IRM leadership and strategic planning. As a result, financial information that managers increasingly require to make informed program decisions in support of foreign policy objectives will continue to be inaccurate and untimely. Recently, however, the Under Secretary for Management, recognizing that effectively managing State’s information resources is critical for the Department to meet its various missions, initiated several efforts to address the Department’s information management problems. These efforts include clarifying the roles and responsibilities of senior officials to ensure that they fulfill federal requirements for IRM, developing a process to prioritize IRM acquisitions departmentwide, and establishing an advisory board of senior officials to provide leadership and oversight for IRM. The Under Secretary told us that these efforts are just first steps in resolving State’s many IRM shortcomings. These initial steps are critical to helping resolve State’s information management problems; still, State needs to maintain the momentum for change by obtaining commitment from senior managers in key program and support areas to continue institutionalizing improvements. Such support will require State to (1) analyze current performance problems and determine how information management solutions can address these problems and (2) educate line managers about how strategic information management can improve mission effectiveness. As the need to fundamentally change is recognized and managers throughout the organization begin to understand their responsibility for change, the organization can begin to focus on an integrated, strategic information management process. Key tenets of such a process include developing a strategic planning process based on mission and business needs, and integrating the planning and budgeting functions. Additionally, the organization should ensure that information resource procurements and contracts are performed in accordance with legal requirements and applicable policy guidance. A basic step in an integrated information management process is building a departmentwide strategic planning process that is anchored to an agencywide business plan that specifies mission goals and objectives. Such a planning process includes (1) identifying the agency’s mission goals and objectives and (2) developing an IRM plan that supports these goals and objectives. State has not yet developed such a strategic IRM planning process. State does not have a departmentwide plan specifying mission, goals, objectives, and priorities, although program planning guidance provides limited information on these. Department officials agreed that a clear statement of mission goals, objectives, and priorities would help them in their IRM planning efforts. The 1994 strategic IRM plan—the first issued since 1991—was developed within the IRM office with comments from the bureaus and is largely a description of numerous information technology projects. The plan does not prioritize State’s numerous IRM initiatives—including office automation, overseas telephone system replacement, overseas telecommunications service, and the integrated financial management system projects—and, thus, cannot guide executive and operational decisions. 
Such prioritization is essential because funding may not be available for all initiatives. Recently, the Under Secretary for Management began focusing attention on improving agencywide program planning. As previously mentioned, the Under Secretary established an advisory board of senior officials whose first task is to develop an IRM vision that provides direct support to the Department mission. The Under Secretary is also considering establishing a new process for linking program, IRM, and other planning processes. Officials in the Bureau of Finance and Management Policy stated that the support of other Under Secretaries will be necessary to ensure departmentwide attention to program planning processes because, historically, planning has not been a focus in State’s culture. As one agency report stated, “... it is a rare Department officer who is able to do much more than cope with today’s crises and issues.” This report further states that the Department needs to significantly increase its strategic planning efforts, recognizing that if State does not know where it wants to go, as well as the options for getting there, it will not do well in the post-Cold War era.

In conjunction with focusing on mission and business goals, successful organizations integrate the planning and budgeting processes. This reinforces the linkage of IRM initiatives to the agency’s mission, provides tight controls during implementation, and helps ensure that projects stay on track. This also helps ensure that budgeting does not become reactive to priorities of the moment that have not been adequately weighed against those of the future, and that plans do not become mere paper exercises.

The IRM planning and budgeting processes have not been linked at State. For example, bureau IRM budgets are not developed out of a departmentwide IRM planning process. Instead, bureau IRM budgets have been developed by the bureaus and reviewed (along with other budgetary items) and approved by the Chief Financial Officer—without the involvement of the Designated Senior Official for IRM or a departmentwide IRM board. Thus, State has not had a means to analyze or eliminate duplication in IRM initiatives and funding.

State has also not had a mechanism to ensure adequate funding for initiatives to address long-standing IRM problems. Projects are funded at a level sufficient to plan them, but not to implement them, according to senior IRM officials. These officials stated that this is a primary reason why several large projects—including replacement of proprietary, obsolescent mini-computers and office automation equipment in State’s domestic bureaus and overseas posts—have made little progress. (See appendix II for details on this systems modernization effort.)

According to a March 1994 memo from the Assistant Secretary for Administration, although the IRM support office lacked the necessary modernization funding, individual bureaus and offices—other than the IRM office—expended $68 million on office automation items. Without a departmentwide, integrated IRM planning and budgeting process, the Department could not ensure that the $68 million was directed towards State’s highest priorities. The memo further stated that such a planning process is critical to eliminating the duplication and waste inevitable in the current approach, and that the absence of this process results in bureaus independently implementing modernization plans in accordance with their own priorities and resources.
Slow progress in modernizing systems has been accompanied by difficulty in supporting and maintaining older technology and increased vulnerability to computer failures. The cost of supporting obsolete, proprietary office automation equipment has been high—about $12 million in fiscal year 1994, according to an IRM official. State officials also said that foreign affairs operations have been affected by computer failures. For example, in January 1994, the Bureau of Near Eastern Affairs experienced failures of old Wang disk drives during 5 of the 10 days of preparation prior to the Secretary’s negotiations in the Middle East. The failures resulted in delays and difficulty in providing briefings to the Executive Secretariat. Systems were down for hours at a time, and reports that were needed to prepare for the negotiations had to be recreated because files were deleted or could not be accessed. The old disk drives ultimately had to be replaced with new equipment to adequately support bureau operations.

The lack of an integrated IRM planning and budgeting function has also resulted in long-standing weaknesses related to backup for the mainframe systems. State has reported inadequate backup as a high-risk weakness under FMFIA for about 10 years. However, such backup has not been provided because of various funding shortfalls. For example, several classified systems in Washington, D.C. do not have backup. One classified system without backup is the telegraphic retrieval system. This system allows for search and retrieval of all cables over the past 20 years. Such a system is important to users who rely on search and retrieval for time-critical research, such as identifying groups that may be responsible for terrorist acts under investigation.

In 1993, State began an effort to better integrate the planning and budgeting functions. The IRM office initiated a departmentwide planning process in which bureau representatives met in separate groups—regional, policy, and management bureaus—to determine spending priorities. This effort represents an improvement from the past in that it (1) relied on decision criteria based on mission benefits and (2) brought together bureau representatives to communicate priorities and needs. However, this process is evolutionary and has not yet been institutionalized as an integrated, departmentwide process for allocating all State IRM funds.

The Federal Acquisition Regulation requires federal agencies to develop acquisition plans designed to obtain full and open competition to the maximum extent practicable in fulfilling agency needs. The purpose of these plans is to ensure that agencies meet their needs in the most effective, economical, and timely manner. Historically, however, State has not conducted adequate planning and management to meet these goals in its acquisition of information technology. About one-half of State’s Delegations of Procurement Authority (DPAs) for information technology acquisitions are sole source. In 1992, because of these procurement problems, the General Services Administration (GSA) lowered the thresholds in State’s DPA that allow State to make IRM purchases without GSA’s prior approval. For example, State’s general authority to award IRM contracts was lowered from $2.5 million to $1.5 million for competitive procurements. State’s acquisition problems include failing to adequately track DPAs and to request DPAs for contract extensions sufficiently in advance of contract expiration dates.
Between 1991 and 1993, about half of State’s requests for DPAs to execute contract extensions were sent to GSA less than a month before the expiration of each contract. For example, in March 1993, State requested a DPA for a contract extension 5 days before a contract for maintenance of State’s Foreign Affairs Data Processing Center was set to expire. State noted in its request to GSA that, without the extension, the Department would have to shut down operations at its Beltsville data processing site and reduce operations at its headquarters site, with an “almost catastrophic effect on the Department’s ability to conduct business.” To prevent this outcome, the contract has been extended twice since March 1993. The December 1993 DPA for an extension was given on the condition that State develop a management plan for the acquisition. State has established a Major Acquisition Program Office within the IRM office to address major acquisition weaknesses. This office has developed a set of new policies and procedures, currently under review by acquisition and IRM officials, for planning major acquisitions. Further, the IRM office has an ongoing review of acquisition management problems, although it has not yet determined how the problems should be addressed. Successful organizations we studied in developing our executive guide on best practices established effective organizational frameworks to provide IRM direction and focus. Such frameworks included positioning a Chief Information Officer (CIO) to provide IRM leadership and authority; establishing an executive-level investment review board to prioritize projects and oversee the organization’s various IRM activities; and ensuring that the agency’s IRM organization provides adequate guidance and support for its agencywide customers. A CIO positioned as a senior management partner can serve as a bridge between top management, line managers, and information support professionals. This includes clearly articulating the critical role information management plays in mission improvement and focusing and advising senior executives on high-value IRM issues, decisions, and investments. Appointing a CIO will not, in itself, resolve problems or lead to improved mission capabilities. The CIO should have the authority to ensure implementation of IRM initiatives and agencywide compliance with approved IRM standards. State has a Designated Senior Official (DSO) for IRM, rather than a CIO. However, because of his position and other responsibilities, State’s DSO has not provided adequate leadership for IRM. The DSO is positioned several levels down within State’s hierarchy and reports to the Under Secretary for Management, whose involvement in IRM has traditionally been limited. The DSO, who is the Assistant Secretary of State for the Bureau of Administration, also has a range of other responsibilities, including all administrative functions of the Department and managing the Foreign Buildings Operations. Finally, the DSO is at the same organizational level as the other bureau chiefs. Without a senior IRM official, State has also not had anyone with the authority to ensure agencywide compliance with any IRM guidance or standards that might be approved. For example, because the DSO is equivalent to other bureau heads, the DSO cannot ensure departmentwide compliance with data standards in an effort to institute a departmentwide data administration program. 
Further, the DSO has no means of ensuring compliance with departmentwide computer or telecommunications standards supporting the current systems modernization effort. The Under Secretary for Management stated that he is acting as the CIO under the current management structure. He believes that it is his responsibility to create the environment and relationships necessary to effectively manage information resources. We agree that his IRM role is critical. However, we are concerned that leaving the CIO as an ad hoc position will not ensure that the processes needed to effect lasting IRM improvements will be institutionalized. A departmentwide process for selecting and reviewing investments is needed to effectively carry out IRM improvement efforts. Such a process would involve an investment review board, with significant control over decisions and balanced representation from key program and support areas. Traditionally, IRM projects have been thought of as individual information technology expenses. The leading organizations we studied, however, began to think of information systems projects, not as one-time expenses, but rather as investments to improve mission performance. They instituted review boards with responsibility for controlling budgets and selecting, funding, integrating, and reviewing information management projects to ensure that they meet agencywide mission and business objectives. Thinking of projects as investments helped to concentrate top management’s attention on measuring the mission benefits, risks, and costs of individual projects. It also helped managers evaluate the tradeoffs between continuing to fund existing operations and developing new performance capabilities. In an effort to institute a more departmentwide focus to agency IRM, the Under Secretary for Management recently established an IRM board of senior State officials. The board, which has met a few times, was established to develop an IRM vision from the Department’s strategic plan; approve the IRM strategic plan; review IRM programs to ensure that program, policy, and acquisition requirements are met; and approve and prioritize IRM acquisitions to be presented to the Under Secretary for Management. It is too early to determine whether the board has sufficient control over key decisions or whether its authority should be increased beyond that of advising the Under Secretary for Management. In addition, State’s board lacks sufficient representation from regional and functional bureaus to ensure that mission-critical information needs receive adequate priority. The board has 11 members of which only 3 represent mission-critical areas. Thus, the majority of the 21 bureaus are not represented on the board. The other eight members of the board represent support areas, including four representatives from the Bureau of Administration, two representatives from the Bureau of Finance and Management Policy, one representative from the Bureau of Diplomatic Security, and the Deputy Legal Adviser. If the board is given sufficient oversight over IRM improvement efforts, it could play an important role in ensuring that projects are completed successfully. This is particularly important at State because periodic Foreign Service Officer rotations hinder managers from seeing projects through to completion. For example, the highest level IRM office employee devoted full-time to the modernization effort has changed five times in the past few years. 
The board could also be an important vehicle for ensuring that important projects, such as data administration, are adequately funded and implemented agencywide. In the past, this has not occurred. For example, the data administration program is intended to support the modernization effort and address fundamental technical inefficiencies that have resulted from State’s decentralized organization and mission and business operations. With posts all over the world managing their own specialized programs and functions, the Department has become reliant on separate systems environments for various overseas and domestic operations. Redundant and incompatible systems operating within these environments produce inconsistent, inaccurate, and untimely data that hamper departmental decision making, according to a State report. The report further states that bureaus spend a considerable amount of time reconciling data delivered by other bureaus. Data administration is needed to ensure that common, integrated data and information support business and program operations. According to IRM officials, however, bureaus (other than the Bureau of Finance and Management Policy) have demonstrated only a token interest in data administration. In addition, the program has not had an official charter, mission, or permanent staff. On several occasions, the data administration program ran out of funds. At one point, the Bureau of Finance and Management Policy provided some of its own operational funds to keep the project going to meet bureau needs. The Office of the Under Secretary for Management recently drafted proposals to begin to bring together IRM planning and budgeting processes; however, State officials said that agencywide commitment will be needed to implement these initiatives. In addition, as previously mentioned, State began in 1993 to hold separate meetings for representatives from the regional, policy, and management bureaus to establish agencywide spending priorities and make decisions on investments in line with mission and business objectives. These are all steps in the right direction; however, it is too early to determine what final impact they will have. One of the basic responsibilities of an agency’s IRM support organization is to provide organizationwide guidance on the management of information resources. Increasingly, IRM support organizations are also called upon to provide information and technical architectures and standards to guide the management and acquisition of information resources. State’s IRM organization, however, has not provided adequate guidance describing how State’s various information resources should be managed. For example, the guidance that the IRM office has provided does not address issues such as strategic IRM planning, management of major acquisitions, or conducting IRM evaluations in accordance with federal requirements. Policy officials are currently revising the guidance to reflect departmental changes, reduce its length, and ensure compliance with federal regulations. The revisions are expected to be completed in 1995. The IRM office also has not provided an infrastructure within which to effectively manage information resources. Specifically, State has not developed an enterprisewide information architecture that identifies the information that is needed to achieve mission objectives and defines how information systems will be integrated through common standards to satisfy those objectives. 
Senior IRM officials recognized that an information architecture was needed, but said that a project to develop one would not be initiated for another year or two. The IRM office is currently working to institute a technical architecture as part of its systems modernization program. The technical architecture is to provide a set of standards and specifications, describing the basic features necessary to permit a wide variety of platforms to interoperate at all of State’s posts and offices worldwide. However, planning for the systems modernization program is based on inadequate supporting analysis. Specifically, State has not identified agencywide information and functional requirements in planning for systems modernization. Instead, State has unnecessarily limited its modernization options by focusing on technology solutions. For example, the Department selected Microsoft Windows as its systems environment at the desktop level. In conducting a requirements survey, IRM officials asked users whether they needed Windows—ignoring other desktop platforms, such as Macintosh and OS/2. As a result, State does not know whether Windows is the most appropriate system environment for meeting users’ needs. With shrinking budgets and reduced staffing, the Department of State is facing new challenges in fulfilling its worldwide responsibilities. Meeting these challenges will require State to increase the effectiveness and efficiency of its mission and business operations, including consular affairs operations aimed at reducing visa fraud and financial management operations aimed at improving financial statements. How successful State is will depend, at least in part, on how well the Department manages its information resources. Although the Department spends hundreds of millions of dollars on IRM activities annually, it continues to be plagued by long-standing IRM problems. As a result of its failure to follow the best IRM practices, major IRM improvement initiatives remain at risk of failure. Specifically, because IRM planning and budgeting processes have not been linked, initiatives to modernize office automation equipment have made little progress, and backup for some mainframes is still lacking. These initiatives have been funded at levels sufficient to plan them, but not to fully implement them. While State has recently established a departmentwide investment review board, the board lacks adequate representation from regional and functional bureaus to ensure adequate support for mission-critical information needs. To resolve its long-standing problems, State must follow the example set by leading organizations and adopt a more strategic approach to information management. Such an approach includes (1) a departmentwide commitment to change, (2) an integrated management process, and (3) an organizationwide information management capability to address mission and business needs. The Under Secretary for Management has initiated efforts to promote change and revise management processes and organizational structures. These are important first steps. However, more action should be taken to sustain and support these efforts. Managers throughout the agency must begin to work together to identify and address information management weaknesses. State must also assess and prioritize its mission and business needs and begin to focus on those projects that are most needed across the Department. 
Only by taking an agencywide focus will State be able to make substantive progress and break from its history of poor information management. To institute modern information resources management practices in support of departmentwide mission and business needs, we recommend that the Secretary of State designate a Chief Information Officer, above the Assistant Secretary level, with the authority necessary to oversee the implementation of departmentwide IRM initiatives and standards, and strengthen the recently established IRM investment review board by (1) increasing regional and functional bureau representation and (2) ensuring that the board’s determinations are implemented. We also recommend that the Chief Information Officer, in conjunction with participants from the IRM investment review board, ensure development of an agency business plan specifying mission goals, objectives, and priorities to provide a sound basis for IRM planning and business process improvements; integrate IRM planning with budgeting and other related processes; ensure that the IRM organization (1) issues adequate guidance to govern agencywide IRM, including the areas of strategic planning and acquisition, and (2) develops information and technical architectures and standards to ensure integration of data and systems; and require periodic evaluations of State’s IRM practices against those of leading organizations and implement necessary improvements to continually strengthen practices. As requested, we did not obtain written comments on a draft of this report. However, we discussed the results of our work with the Under Secretary for Management and senior IRM officials, who generally agreed with the information presented. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies to the Secretary of State, other interested congressional committees, and the Director of the Office of Management and Budget. Copies will also be sent to others upon request. Please contact me at (202) 512-6240 if you or your staff have any questions. Other major contributors are listed in appendix IV. To address our objective, we focused on a recent GAO report on the best practices of leading public and private organizations, and reviewed legislation, federal guidance, and other IRM criteria. On the basis of these criteria, we identified elements we believe to be critical and relevant to IRM success at State. These elements include adequate leadership and authority for IRM, and strategic IRM planning based on the agency’s mission and business needs. To obtain background information on the long-standing IRM problems at State, we interviewed and collected reports from officials at the General Services Administration, the Office of Management and Budget, and State’s Office of Inspector General. We reviewed internal reports and evaluations from State to gain the agency’s perspective on its IRM program. Further, we interviewed State officials and observed operations at the Foreign Affairs Data Processing Center, the Communications Center at State headquarters, and the Information Management Center in Beltsville, Maryland. To assess State’s organizational structure, we consulted various offices departmentwide. 
Specifically, we interviewed senior State officials (including the Under Secretary for Management, the Assistant Secretary for Administration, and the Deputy Assistant Secretary for Information Management), as well as other IRM representatives to gain their perspectives on IRM needs and challenges, and corresponding initiatives to address them. Further, we analyzed documents and interviewed representatives from Consular Affairs, Finance and Management Policy, Diplomatic Security, and the regional bureaus to learn about the bureaus’ IRM activities, support from and coordination with the IRM office, and whether or not bureau information and technology needs are adequately met. To evaluate State’s IRM planning, we reviewed plans and supporting documentation and discussed IRM planning processes with relevant IRM officials. We observed newly instituted integrated planning sessions in which users work together to prioritize their technology needs and develop an IRM spending plan. We interviewed program planning officials concerning the link between program and IRM planning and the need to develop a departmentwide business plan. Additionally, we obtained information on forums established to coordinate IRM activities and initiatives agencywide. To assess State’s ongoing IRM improvement efforts, we reviewed and analyzed modernization plans and supporting documents and interviewed relevant IRM office, Diplomatic Telecommunications Service Program Office, and other bureau officials. We consulted with officials from the National Institute of Standards and Technology to gather information on approaches to establishing open system environments. We performed our work at State headquarters offices in the Washington, D.C., area. As requested by your office, we did not obtain written comments on a draft of this report. However, we discussed the results of our work with the Under Secretary for Management and senior IRM officials, who generally agreed with the information presented. State has a number of weaknesses that it has reported over the past decade as high risks under FMFIA and its implementing guidance. These weaknesses include (1) reliance on obsolete proprietary equipment that is increasingly vulnerable to failure and rising maintenance costs, (2) use of out-dated microfiche to check the names of terrorists, narcotics traffickers, and others prior to the issuance of visas, (3) inaccurate and untimely financial information to support program decisions, and (4) lack of backup capabilities for mainframe computers. The Department has a number of initiatives aimed at addressing these weaknesses. State’s domestic bureaus and overseas posts are equipped to varying degrees with mini-computers and office automation equipment, which State purchased over a 15-year period almost exclusively from one vendor. Now this equipment is obsolescent and, in many cases, costly to maintain. According to one Department report, 92 percent of State’s unclassified office automation equipment and 72 percent of its domestic equipment fit the Federal Information Resources Management Regulation definition of obsolete. In addition, the IRM office reported that maintenance costs were about $12 million in fiscal year 1994. State has consequently embarked on a program to modernize its aging information technology infrastructure. This program, which began in 1992 and is managed by the IRM office, is aimed at replacing State’s proprietary hardware and software systems with an open systems environment. 
State estimates that the program will cost about $530 million from fiscal year 1994 through 1998. The main goals for the overall modernization program, identified in State’s March 1994 Open Systems Migration Implementation Plan, are to reduce dependency on proprietary architectures throughout the Department, move new and existing systems to a modern, open technical environment, and improve support of State’s business functions. At least 228 of State’s more than 260 embassies and posts conduct consular operations overseas. These consular operations include processing visas for foreign nationals and providing passport services for U.S. citizens. Of these 228 posts, only 110 have an automated namechecking system that is on-line to a central database at State headquarters. Forty-six of the posts rely on a system known as the Distributed Name Checking System, which uses magnetic tape and compact disk-read only memory (CD-ROM) files. One consular official told us that these files are about 6 weeks out of date. Finally, 72 posts rely on microfiche that are several months out of date and are so time-consuming and difficult to use that consular staff may not check for ineligible applicants prior to issuing a visa. The 72 posts that do not have any automated namechecking capability unnecessarily risk issuing visas to persons who could engage in activities that endanger the welfare and security of United States citizens. State’s Inspector General testified before the Congress in July 1993 that IRM and procedural shortfalls helped facilitate the issuance of at least 3 visas to Sheik Abdel Rahman, indicted in the February 1993 World Trade Center bombing, which killed 6 people, injured more than 1,000 others, and caused damage estimated at more than a half billion dollars. The Inspector General testified that the first two visas were issued because the Sheik’s name was not added to the namechecking system until 7 years after it should have been. In 1990, although his name had been added to the system, the Khartoum post issued a visa to the Sheik without checking the microfiche namecheck system. According to the Inspector General, because the microfiche system is so time-consuming and cumbersome, there are probably numerous occasions throughout the world where the microfiche is not being checked as required. The Foreign Relations Authorization Act for fiscal years 1994 and 1995 mandates that all posts have automated namechecking systems by October 30, 1995. State officials were uncertain whether the Department would meet the deadline due to a number of possible hindrances cited in the Bureau program plan. These hindrances include the following: (1) the inability to complete procurements in a timely manner, (2) failure of the IRM office and other agencies to provide the infrastructure to support installation, and (3) insufficient resources and/or facilities for posts to physically collect and process funds. State is currently developing the Integrated Financial Management System (IFMS), which is intended to link State’s worldwide operations and provide managers at all levels with reliable financial information to plan and conduct operations in conformance with governmentwide requirements. The system is expected to partially address weaknesses in management and accountability of real and personal property, worldwide disbursing and cashiering, and payroll transactions. The Department has identified such weaknesses as high-risk areas for the past 3 years in its annual FMFIA reports to the President and the Congress. 
We reported in August 1994, however, that State’s efforts to plan and manage the IFMS initiative have not been adequate, increasing the risk that the system will not resolve long-standing financial management weaknesses or meet managers’ future information needs. Specifically, we reported that State did not have any documentation that described the anticipated financial management structure, how various subsidiary systems will integrate with this structure, or how IFMS is related to State’s other long-term improvement efforts. We reported that State also had not identified all existing financial management systems and subsystems to be enhanced or maintained in the improvement project. We concluded that without in-depth knowledge of the current financial accounting and management environment and a fully articulated target structure, it will be very difficult for State to improve its processes or correct weaknesses. State has reported the lack of critical ADP safeguards, such as backup capability, for its mainframe systems since 1984. One mainframe lacking backup supports agencywide, classified functions at the headquarters Foreign Affairs Data Processing Center. One system on this mainframe—the telegraphic retrieval system—is especially important because the system allows for search and retrieval of all cables over the past 20 years. This system is important to users, such as the Ambassador at Large for Counter-Terrorism, who rely on search and retrieval for important time-critical research. For example, the system was recently queried to assist in the identification of terrorist groups who may be responsible for terrorist acts under investigation. State recently installed a new mainframe at the Foreign Affairs Data Processing Center at State headquarters. State expects this mainframe to provide backup capabilities for unclassified information systems at its Beltsville Information Management Center by the end of 1994.

Initiate, mandate, and facilitate major changes in information management to improve performance.
Practice 1: Recognize and communicate the urgency to change information management practices.
Practice 2: Get line management involved and create ownership.
Practice 3: Take action and maintain momentum.

Establish an outcome-oriented, integrated strategic information management process.
Practice 4: Anchor strategic planning in customer needs and mission goals.
Practice 5: Measure the performance of key mission delivery processes.
Practice 6: Focus on process improvement in the context of an architecture.
Practice 7: Manage information systems projects as investments.
Practice 8: Integrate the planning, budgeting, and evaluation processes.

Build organizationwide information management capabilities to address mission needs.
Practice 9: Establish customer/supplier relationships between line and information management professionals.
Practice 10: Position a Chief Information Officer as a senior management partner.
Practice 11: Upgrade skills and knowledge of line and information management professionals.

Financial Management: State’s Systems Planning Needs to Focus on Correcting Long-standing Problems (GAO/AIMD-94-141, August 12, 1994).
Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994).
Financial Management: Serious Deficiencies in State’s Financial Systems Require Sustained Attention (GAO/AFMD-93-9, November 13, 1992).
Management of Overseas Real Property (GAO/HR-93-15, December 1992). 
Pursuant to a congressional request, GAO reviewed the Department of State's information resources management (IRM) program and ongoing IRM improvement efforts. GAO found that: (1) State has poorly managed its information resources and continues to use inadequate and obsolete information technology, which has resulted in critical information shortfalls and interrupted operations; (2) although State has a number of initiatives to improve IRM, its failure to follow the best IRM practices and commit top-level management to IRM jeopardizes its improvement efforts; (3) State lacks an adequate mechanism to prevent IRM duplication and ensure sufficient funding; (4) State needs a chief information officer (CIO) to provide leadership and guidance for IRM and an investment and oversight process involving senior regional and functional bureau managers; and (5) State needs to address long-standing fundamental barriers to effective IRM and commit to a departmentwide IRM approach to meet its critical mission and business functions.
CBP’s SBI program is to leverage technology, tactical infrastructure, and people to allow CBP agents to gain control of the nation’s borders. Within SBI, SBInet is the program for acquiring, developing, integrating, and deploying an appropriate mix of surveillance technologies and command, control, communications, and intelligence (C3I) technologies. The surveillance technologies are to include a variety of sensor systems aimed at improving CBP’s ability to detect, identify, classify, and track items of interest along the borders. Unattended ground sensors are to be used to detect heat and vibrations associated with foot traffic and metal associated with vehicles. Radars mounted on fixed and mobile towers are to detect movement, and cameras on fixed and mobile towers are to be used to identify, classify, and track items of interest detected by the ground sensors and the radars. Aerial assets are also to be used to provide video and infrared imaging to enhance tracking of targets. The C3I technologies are to include software and hardware to produce a Common Operating Picture (COP)—a uniform presentation of activities within specific areas along the border. The sensors, radars, and cameras are to gather information along the border, and the system is to transmit this information to the COP terminals located in command centers and agent vehicles, assembling this information to provide CBP agents with border situational awareness. A system life cycle management approach typically consists of a series of phases, milestone reviews, and related processes to guide the acquisition, development, deployment, and operation and maintenance of a system. The phases, reviews, and processes cover such important life cycle activities as requirements development and management, design, software development, and testing. In general, SBInet surveillance systems are to be acquired through the purchase of commercially available products, while the COP systems involve development of new, customized systems and software. Together, both categories are to form a deployable increment of SBInet capabilities, which the program office refers to as a “block.” Each block is to include a release or version of the COP. The border area that receives a given block is referred to as a “project.” Among the key processes provided for in the SBInet system life cycle management approach are processes for developing and managing requirements and for managing testing activities. SBInet requirements are to consist of a hierarchy of six types of requirements, with the high-level operational requirements at the top. These high-level requirements are to be decomposed into lower-level, more detailed system, component, design, software, and project requirements. SBInet testing consists of a sequence of tests that are intended first to verify that individual system parts meet specified requirements, and then verify that these combined parts perform as intended as an integrated and operational system. Having a decomposed hierarchy of requirements and an incremental approach to testing are both characteristics of complex information technology (IT) projects. Important aspects of SBInet—the scope, schedule, and development and deployment approach—remain ambiguous and in a continued state of flux, making it unclear and uncertain what technology capabilities will be delivered and when, where, and how they will be delivered. 
For example, the scope and timing of planned SBInet deployments and capabilities have continued to change since the program began, and remain unclear. Further, the approach that is being used to define, develop, acquire, test, and deploy SBInet is similarly unclear and has continued to change. The absence of clarity and stability in these key aspects of SBInet introduces considerable program risks, hampers DHS’s ability to measure program progress, and impairs the ability of Congress to oversee the program and hold DHS accountable for program results. The scope and timing of planned SBInet deployments and capabilities have not been clearly established, but rather have continued to change since the program began. Specifically, as of December 2006, the SBInet System Program Office planned to deploy an “initial” set of capabilities along the entire southwest border by late 2008 and a “full” set of operational capabilities along the southern and northern borders (a total of about 6,000 miles) by late 2009. Since then, however, the program office has modified its plans multiple times. As of March 2008, it planned to deploy SBInet capabilities to just three out of nine sectors along the southwest border—Tucson Sector by 2009, Yuma Sector by 2010, and El Paso Sector by 2011. According to program officials, no deployment dates had been established for the remainder of the southwest or northern borders. At the same time, the SBInet System Program Office committed to deploying Block 1 technologies to two locations within the Tucson Sector by the end of 2008, known as Tucson 1 and Ajo 1. However, as of late July 2008, program officials reported that the deployment schedule for these two sites has been modified, and they will not be operational until “sometime” in 2009. The slippages in the dates for the first two Tucson deployments, according to a program official, will, in turn, delay subsequent Tucson deployments, although revised dates for these subsequent deployments have not been set. In addition, the current Block 1 design does not provide key capabilities that are in requirements documents and were anticipated to be part of the Block 1 deployments to Tucson 1 and Ajo 1. For example, the first deployments of Block 1 will not be capable of providing COP information to the agent vehicles. Without clearly establishing program commitments, such as capabilities to be deployed and when and where they are to be deployed, program progress cannot be measured and responsible parties cannot be held accountable. Another key aspect of successfully managing large programs like SBInet is having a schedule that defines the sequence and timing of key activities and events and is realistic, achievable, and minimizes program risks. However, the timing and sequencing of the work, activities, and events that need to occur to meet existing program commitments are also unclear. Specifically, the program office does not yet have an approved integrated master schedule to guide the execution of SBInet. Moreover, our assimilation of available information from multiple program sources indicates that the schedule has continued to change. Program officials attributed these schedule changes to the lack of a satisfactory system-level design, turnover in the contractor’s workforce, including three different program managers and three different lead system engineers, and attrition in the SBInet Program Office, including turnover in the SBInet Program Manager position. 
Without stability and certainty in the program’s schedule, program cost and schedule risks increase, and meaningful measurement and oversight of program status and progress cannot occur, in turn limiting accountability for results. System quality and performance are in large part governed by the approach and processes followed in developing and acquiring the system. The approach and processes should be fully documented so that they can be understood and properly implemented by those responsible for doing so, thus increasing the chances of delivering promised system capabilities and benefits on time and within budget. The life cycle management approach and processes being used by the SBInet System Program Office to manage the definition, design, development, testing, and deployment of system capabilities have not been fully and clearly documented. Rather, what is defined in various program documents is limited and not fully consistent across these documents. For example, officials have stated that they are using the draft Systems Engineering Plan, dated February 2008, to guide the design, development, and deployment of system capabilities, and the draft Test and Evaluation Master Plan, dated May 2008, to guide the testing process, but both of these documents appear to lack sufficient information to clearly guide system activities. For instance, the Systems Engineering Plan includes a diagram of the engineering process, but the steps of the process and the gate reviews are not defined or described in the text of the document. Further, statements by program officials responsible for system development and testing activities, as well as briefing materials and diagrams that these officials provided, did not add sufficient clarity to describe a well-defined life cycle management approach. Program officials told us that both the government and contractor staff understand the SBInet life cycle management approach and related engineering processes through the combination of the draft Systems Engineering Plan and government-contractor interactions during design meetings. Nevertheless, they acknowledged that the approach and processes are not well documented, citing a lack of sufficient staff to both document the processes and oversee the system’s design, development, testing, and deployment. They also told us that they are adding new people to the program office with different acquisition backgrounds, and they are still learning about, evolving, and improving the approach and processes. The lack of definition and stability in the approach and related processes being used to define, design, develop, acquire, test, and deploy SBInet introduces considerable risk that both the program officials and contractor staff will not understand what needs to be done when, and that the system will not meet operational needs or perform as intended. DHS has not effectively defined and managed SBInet requirements. While the program office recently issued guidance that is consistent with recognized leading practices, this guidance was not finalized until February 2008, and thus was not used in performing a number of key requirements-related activities. In the absence of well-defined guidance, the program’s efforts to effectively define and manage requirements have been mixed. For example, the program has taken credible steps to include users in the definition of requirements. However, several requirements definition and management limitations exist. 
One of the leading practices associated with effective requirements development and management is engaging system users early and continuously. In developing the operational requirements, the System Program Office involved SBInet users in a manner consistent with leading practices. Specifically, it conducted requirements-gathering workshops from October 2006 through April 2007 to ascertain the needs of Border Patrol agents and established work groups in September 2007 to solicit input from both the Office of Air and Marine Operations and the Office of Field Operations. Further, the program office is developing the COP technology in a way that allows end users to be directly involved in software development activities, which permits solutions to be tailored to their needs. Such efforts increase the chances of developing a system that will successfully meet those needs. The creation of a requirements baseline establishes a set of requirements that have been formally reviewed and agreed on, and thus serve as the basis for further development or delivery. According to SBInet program officials, the SBInet Requirements Development and Management Plan, and leading practices, requirements should be baselined before key system design activities begin in order to inform, guide, and constrain the system’s design. While many SBInet requirements have been baselined, two types have not yet been baselined. According to the System Program Office, the operational requirements, system requirements, and various system component requirements have been baselined. However, as of July 2008, the program office had not baselined its COP software requirements and its project-level requirements for the Tucson Sector, which includes Tucson 1 and Ajo 1. According to program officials, the COP requirements have not been baselined because certain interface requirements had not yet been completely identified and defined. Despite the absence of baselined COP and project-level requirements, the program office has proceeded with development, integration, and testing activities for the Block 1 capabilities to be delivered to Tucson 1 and Ajo 1. As a result, it faces an increased risk of deploying systems that do not align well with requirements, and thus may require subsequent rework. Another leading practice associated with developing and managing requirements is maintaining bidirectional traceability from high-level operational requirements through detailed low-level requirements to test cases. The SBInet Requirements Development and Management Plan recognizes the importance of traceability, and the SBInet System Program Office established detailed guidance for populating and maintaining a requirements database that maintains linkages among requirement levels and test verification methods. To provide for requirements traceability, the prime contractor established such a requirements management database. However, the reliability of the database is questionable. We attempted to trace requirements in the version of this database that the program office received in March 2008, and were unable to trace large percentages of component requirements to either higher-level or lower-level requirements. For example, an estimated 76 percent (with a 95 percent degree of confidence of being between 64 and 86 percent) of the component requirements that we randomly sampled could not be traced to the system requirements and then to the operational requirements. In addition, an estimated 20 percent (with a 95 percent degree of confidence of being between 11 and 33 percent) of the component requirements in our sample failed to trace to a verification method. Without ensuring that requirements are fully traceable, the program office does not have a sufficient basis for knowing that the scope of the contractor’s design, development, and testing efforts will produce a system solution that meets operational needs and performs as intended. 
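The bidirectional traceability checks described above can be pictured with a short sketch. The fragment below is illustrative only; the requirement identifiers, field names, and database layout are invented for this example and do not represent the contractor’s actual requirements management database. It walks parent links from each component requirement upward toward an operational requirement and flags component requirements that lack either an upward trace or an assigned verification method, the two gaps measured in the sample analysis.

```python
# Illustrative only: a toy bidirectional-traceability check over a hypothetical
# requirements database. Requirement IDs, field names, and values are invented.

# Each requirement records its level, the higher-level requirement it decomposes
# (its parent), and any test verification method assigned to it.
requirements = {
    "OPR-1":   {"level": "operational", "parent": None,     "verification": None},
    "SYS-10":  {"level": "system",      "parent": "OPR-1",  "verification": None},
    "CMP-100": {"level": "component",   "parent": "SYS-10", "verification": "integration test IT-4"},
    "CMP-101": {"level": "component",   "parent": None,     "verification": "lab demo LD-2"},  # no upward trace
    "CMP-102": {"level": "component",   "parent": "SYS-10", "verification": None},             # no verification method
}

def traces_to_operational(req_id):
    """Walk parent links upward; True only if the chain reaches an operational requirement."""
    seen = set()
    current = requirements.get(req_id)
    while current is not None and req_id not in seen:
        if current["level"] == "operational":
            return True
        seen.add(req_id)
        req_id = current["parent"]
        current = requirements.get(req_id) if req_id else None
    return False

components = [rid for rid, req in requirements.items() if req["level"] == "component"]
untraced = [rid for rid in components if not traces_to_operational(rid)]
unverified = [rid for rid in components if requirements[rid]["verification"] is None]

print(f"{len(untraced)} of {len(components)} component requirements lack an upward trace: {untraced}")
print(f"{len(unverified)} of {len(components)} component requirements lack a verification method: {unverified}")
```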
To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion. This includes having an overarching test plan or strategy and testing individual system components to ensure that they satisfy requirements prior to integrating them into the overall system. This test management plan should define the schedule of high-level test activities in sufficient detail to allow for more detailed test planning and execution to occur, define metrics to track test progress and report and address results, and define the roles and responsibilities of the various groups responsible for different levels of testing. However, the SBInet program office is not effectively managing its testing activities. Specifically, the SBInet Test and Evaluation Master Plan, which documents the program’s test strategy and is being used to manage system testing, has yet to be approved by the SBInet Acting Program Manager, even though testing activities began in June 2008. Moreover, the plan is not complete. In particular, it does not (1) contain an accurate and up-to-date test schedule, (2) identify any metrics for measuring testing progress, and (3) clearly define and completely describe the roles and responsibilities of various entities that are involved in system testing. Further, the SBInet System Program Office has not performed individual component testing as part of integration testing. As of July 2008, agency officials reported that component-level tests had not been completed and were not scheduled to occur. Instead, officials stated that Block 1 components were evaluated based on what they described as “informal tests” (i.e., contractor observations of cameras and radar suites in operation at a National Guard facility in the Tucson Sector) and stated that the contractors’ self-certification that the components meet functional and performance requirements was acceptable. Program officials acknowledged that this approach did not verify whether the individual components in fact met requirements. Without effectively managing testing activities, the chances of SBInet testing being effectively performed are reduced, which in turn increases the risk that the delivered and deployed system will not meet operational needs or perform as intended. In closing, I would like to stress that a fundamental aspect of successfully implementing a large IT program like SBInet is establishing program commitments, including what capabilities will be delivered and when and where they will be delivered. Only through establishing such commitments, and adequately defining the approach and processes to be used in delivering them, can DHS effectively position itself for measuring progress, ensuring accountability for results, and delivering a system solution with its promised capabilities and benefits on time and within budget constraints. For SBInet, this has not occurred to the extent that it needs to for the program to have a meaningful chance of succeeding. 
In particular, commitments to the timing and scope of system capabilities remain unclear and continue to change, with the program committing to far fewer capabilities than originally envisioned. Further, how the SBInet system solution is to be delivered has been equally unclear and inadequately defined. Moreover, while the program office has defined key practices for developing and managing requirements, these practices were developed after several important requirements activities were performed. In addition, efforts performed to date to test whether the system meets requirements and functions as intended have been limited. Collectively, these limitations increase the risk that the delivered system solution will not meet user needs and operational requirements and will not perform as intended. In turn, the chances are increased that the system will require expensive and time-consuming rework. In light of these circumstances and risks surrounding SBInet, our soon to be issued report contains eight recommendations to the department aimed at reassessing its approach to and plans for the program—including its associated exposure to cost, schedule, and performance risks—and disclosing these risks and alternative courses of action for addressing them to DHS and congressional decision makers. The recommendations also provide for correcting the weaknesses surrounding the program’s unclear and constantly changing commitments and its life cycle management approach and processes, as well as implementing key requirements development and management and testing practices. While implementing these recommendations will not guarantee a successful program, it will minimize the program’s exposure to risk and thus the likelihood that it will fall short of expectations. For SBInet, living up to expectations is important because the program is a large, complex, and integral component of DHS’s border security and immigration control strategy. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information, please contact Randolph C. Hite at (202) 512-3439 or at hiter@gao.gov. Other key contributors to this testimony were Carl Barden, Deborah Davis, Neil Doherty, Lee McCracken, Jamelyn Payan, Karl Seifert, Sushmita Srikanth, Karen Talley, and Merry Woo. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security's (DHS) Secure Border Initiative (SBI) is a multiyear, multibillion-dollar program to secure the nation's borders through, among other things, new technology, increased staffing, and new fencing and barriers. The technology component of SBI, which is known as SBInet, involves the acquisition, development, integration, and deployment of surveillance systems and command, control, communications, and intelligence technologies. GAO was asked to testify on its draft report, which assesses DHS's efforts to (1) define the scope, timing, and life cycle management approach for planned SBInet capabilities and (2) manage SBInet requirements and testing activities. In preparing the draft report, GAO reviewed key program documentation, including guidance, plans, and requirements and testing documentation; interviewed program officials; analyzed a random probability sample of system requirements; and observed operations of the initial SBInet project. Important aspects of SBInet remain ambiguous and in a continued state of flux, making it unclear and uncertain what technology capabilities will be delivered and when, where, and how they will be delivered. For example, the scope and timing of planned SBInet deployments and capabilities have continued to be delayed without becoming more specific. Further, the program office does not have an approved integrated master schedule to guide the execution of the program, and the nature and timing of planned activities has continued to change. This schedule-related risk is exacerbated by the continuous change in, and the absence of a clear definition of, the approach that is being used to define, develop, acquire, test, and deploy SBInet. SBInet requirements have not been effectively defined and managed. While the program office recently issued guidance that is consistent with recognized leading practices, this guidance was not finalized until February 2008, and thus was not used in performing a number of important requirements-related activities. In the absence of this guidance, the program's efforts have been mixed. For example, while the program has taken steps to include users in developing high-level requirements, several requirements definition and management limitations exist. These include a lack of proper alignment (i.e., traceability) among the different levels of requirements, as evidenced by GAO's analysis of a random probability sample of requirements, which revealed large percentages that were not traceable backward to higher level requirements, or forward to more detailed system design specifications and verification methods. SBInet testing has also not been effectively managed. While a test management strategy was drafted in May 2008, it has not been finalized and approved, and it does not contain, among other things, a high-level master schedule of SBInet test activities, metrics for measuring testing progress, and a clear definition of testing roles and responsibilities. Further, the program office has not tested the individual system components to be deployed to the initial deployment locations, even though the contractor initiated testing of these components with other system components and subsystems in June 2008. 
In light of these circumstances, our soon to be issued report contains eight recommendations to the department aimed at reassessing its approach to and plans for the program, including its associated exposure to cost, schedule and performance risks, and disclosing these risks and alternative courses of action to DHS and congressional decision makers. The recommendations also provide for correcting the weaknesses surrounding the program's unclear and constantly changing commitments and its life cycle management approach and processes, as well as implementing key requirements development and management and testing practices.
Medical imaging services, grouped into six major modalities, use different types of imaging equipment and media for creating an image. Physicians bill for providing these services under the Medicare physician fee schedule, which, for payment purposes, divides an imaging service into two components: the technical component, which pays for the performance of the imaging examination, and the professional component, which pays for the physician’s interpretation of the image. Recently, CMS implemented two payment changes in 2006 and 2007 that reduce physician payments for certain imaging services. Medical imaging is a noninvasive process used to obtain pictures of the internal anatomy or function of the anatomy using one of many different types of imaging equipment and media for creating the image. Imaging tests fall into six modalities: CT, MRI, nuclear medicine, ultrasound, X-ray and other standard imaging, and procedures that use imaging. Depending on the service, imaging equipment uses radiation, sound waves, or magnets to create images. X-rays and other standard imaging services, CT, and certain nuclear medicine services, such as positron emission tomography (PET), use radiation; ultrasound uses sound waves; MRI uses magnets and radio waves. For certain X-rays, CTs, and MRIs, contrast agents, such as barium or iodine solutions, are administered to patients orally or intravenously. By using contrast, sometimes referred to as “dye,” as part of the imaging examination, physicians can view soft tissue and organ function more clearly. Table 1 provides further details on each imaging modality. Imaging equipment using radiation poses more potential risk to patients than other imaging mediums. The amount of radiation patients are exposed to varies based on whether the image is obtained by X-ray or CT. CTs emit the largest amount of radiation, but estimates of the radiation dose—or the amount of radiation absorbed—from a diagnostic CT procedure can vary by a factor of 10 or more, depending on the type of CT procedure, patient size, and the CT system and its operating technique. For example, the typical dose in a CT of the abdomen is about five times that of the head, and about eight times that of an X-ray of the spine. Medicare generally covers medically necessary services provided by physicians operating within the scope of practice allowed by their state licensure, without regard to their specialty or specific qualifications. All diagnostic tests are required to be provided under at least general physician supervision—that is, a physician is responsible for the training of the technical staff performing the test, and the maintenance of the necessary equipment and supplies. Medicare’s physician fee schedule in 2006 included more than 7,000 services—together with their corresponding payment rates. About 900 of these services are associated with imaging. Each imaging service on the fee schedule has three relative value units (RVU), which correspond to the three components of physician payment: (1) physician work—the financial value of physicians’ time, skill, and effort that are associated with providing the service, (2) practice expense—the costs incurred by physicians in employing office staff, renting office space, and buying supplies and equipment, and (3) malpractice expense—the premiums paid by physicians for professional liability insurance. Each RVU measures the relative costliness of providing a particular service. 
For example, in 2006, the three RVUs for performing and interpreting a standard chest X-ray summed to 0.74. In contrast, the RVUs for CT of the head/brain without dye summed to 6.15, indicating that this service, on average nationally, consumed more than eight times the resources of the standard chest X-ray. To determine Medicare payment for a particular service, the sum of the RVUs is multiplied by a conversion factor, which is a dollar amount that translates each service’s RVUs into a payment rate. For example, in 2006, Medicare paid $233, on average nationally, for physicians performing and interpreting a CT of the head/brain without dye (6.15 multiplied by a conversion factor of $37.8975). Some items paid under the physician fee schedule that are used in the provision of imaging services—such as radiopharmaceuticals—do not have RVUs associated with them. Instead, these items are priced locally by Medicare’s Part B contractors and billed separately from the imaging services paid for under the Medicare physician fee schedule. Physicians under the Medicare physician fee schedule can be paid for performing the imaging examination—the technical component—and interpreting the imaging examination—the professional component. The payment for the technical component is intended to cover the cost of the equipment, supplies, and nonphysician staff and is generally significantly higher than the payment for the professional component, which is intended to cover the physician’s time in interpreting the image and writing a report on the findings. Medicare allows physicians to bill for these services separately because performing and interpreting the examination could be done by different physicians and in different settings. If the same physician performs and interprets the examination, the physician can submit a global bill to Medicare. The same rules apply under the physician fee schedule if the imaging services are completed by radiologists in independent diagnostic testing facilities (IDTF)—facilities that are independent of a hospital and physician office, or “free-standing,” and only provide outpatient diagnostic services. When the imaging examination is performed in an institutional setting, such as a hospital or skilled nursing facility, the physician can bill Medicare only for the professional component, while payment for the technical component is covered under a different Medicare payment system, according to the setting in which the service is provided. For example, the technical component of an imaging examination in a hospital inpatient setting is bundled into a facility payment paid under Medicare Part A, whereas the technical component of an examination in a hospital outpatient department is paid under Medicare’s hospital outpatient payment system, which is financed through Part B. In recent years, CMS has implemented two payment changes to the way Medicare pays for imaging services under the physician fee schedule. Starting January 1, 2006, CMS reduced physician payments when multiple images are taken on contiguous body parts during the same visit. CMS adopted a recommendation made by MedPAC in 2005 as a way to ensure that fee schedule payments took into account efficiencies, such as savings from technical preparation and supplies, which occur when multiple imaging services are furnished sequentially. Physicians receive the full fee for the highest-paid imaging service in a visit, but fees for additional imaging services are reduced by 25 percent.
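The fee schedule arithmetic described above is simple enough to state as a one-line calculation. The sketch below is illustrative only: the function name is ours, and the only inputs are the 2006 RVU totals and conversion factor cited in the text.

```python
# Illustrative sketch of the physician fee schedule arithmetic described
# above: payment = (sum of work, practice expense, and malpractice RVUs)
# multiplied by the conversion factor. Figures are the 2006 examples cited
# in the text; the function name is ours, not CMS terminology.

CONVERSION_FACTOR_2006 = 37.8975  # dollars per RVU in 2006


def fee_schedule_payment(total_rvus, conversion_factor=CONVERSION_FACTOR_2006):
    """Return the national average payment, in dollars, for a service."""
    return round(total_rvus * conversion_factor, 2)


print(fee_schedule_payment(6.15))  # CT of the head/brain without dye: ~$233
print(fee_schedule_payment(0.74))  # standard chest X-ray: ~$28
```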
The reduction is applied only to the technical component. Beginning January 1, 2007, CMS implemented two provisions in the DRA: it (1) established a cap on the physician fee schedule payments for certain imaging services at the payment levels established in Medicare’s OPPS and (2) in certain cases, eliminated the Medicare budget neutrality requirement, which is designed to ensure that specific payment changes neither increase nor decrease the total amount of Medicare payments to physicians beyond a specified amount. The first provision, in practice, requires that payment for the technical component of an image in the physician office not exceed what Medicare pays for the technical component of the same service performed in a hospital outpatient department. For example, in 2006, Medicare paid $903 under the physician fee schedule for an MRI of the brain, yet paid $506 for the same test under OPPS. Under the DRA payment change, in 2007, Medicare paid the lesser amount for this examination, regardless of whether it was performed in a hospital outpatient department or in a physician’s office. The second provision, excluding the two imaging payment reductions from the calculation of budget neutrality, results in Medicare savings as a practical matter. Savings attributed to the 25 percent multiple payment reduction and the capping of certain payments at the OPPS levels are not offset by increases for other services under the physician fee schedule. From 2000 through 2006, Medicare spending on imaging services paid for under the Part B physician fee schedule more than doubled. About 80 percent of the spending growth was associated with growth in the volume and complexity of imaging services. Compared with 2000, in 2006 more beneficiaries obtained imaging services, and average use per beneficiary also increased. Medicare spending on imaging services paid for under the Part B physician fee schedule more than doubled from 2000 through 2006, increasing to about $14 billion. (See fig. 1.) This increase represents a growth rate of 13 percent a year on average, compared to 8.2 percent for all Medicare physician-billed services during that period. Although spending increased each year after 2000, the rate of growth slowed in 2006. In that year, CMS implemented a payment change for imaging that reduced physician fees by 25 percent for additional imaging services involving contiguous body parts imaged during the same session. (See app. II for total expenditures for imaging services paid for under the physician fee schedule and expenditures by imaging modality for each year from 2000 through 2006.) Advanced imaging services—CT, MRI, and nuclear medicine—saw the highest growth rates. Spending on these advanced imaging modalities increased at an average annual rate of 17 percent, almost twice as fast as spending on services in the three other imaging modalities—ultrasounds, standard imaging (mostly X-rays), and procedures that use imaging. The faster-growing advanced imaging services are more complex and therefore more costly. Medicare pays physicians more for both the technical component and the professional component for these services, on average, than it pays for other imaging services. (See table 2.) The payment is higher, in part, because advanced imaging equipment is more costly to obtain and requires more skilled technicians to operate.
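To make the 2007 DRA cap and the 2006 multiple-procedure reduction concrete, the following sketch applies them to the figures quoted above. The function names and structure are our own illustration, not CMS's actual claims-processing logic.

```python
# A minimal sketch, using the figures quoted above, of the payment changes
# described in this section. Function names and structure are illustrative
# assumptions, not CMS's actual claims-processing logic.

def apply_opps_cap(fee_schedule_technical, opps_rate):
    """DRA cap: pay the lesser of the physician fee schedule technical
    component and the OPPS rate for the same service."""
    return min(fee_schedule_technical, opps_rate)


def apply_multiple_procedure_reduction(technical_fees):
    """2006 policy: full fee for the highest-paid imaging service in a
    session; technical fees for additional services reduced by 25 percent."""
    ordered = sorted(technical_fees, reverse=True)
    return [ordered[0]] + [fee * 0.75 for fee in ordered[1:]]


print(apply_opps_cap(903.0, 506.0))                         # brain MRI example from the text: 506.0
print(apply_multiple_procedure_reduction([400.0, 300.0]))   # hypothetical fees: [400.0, 225.0]
```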
For example, in 2006, Medicare paid $1,118 for the most commonly physician-billed MRI imaging test—an “MRI brain without and with dye”—of which $995 was for performing the examination. In contrast, Medicare paid $28 for the most commonly performed standard imaging service, a chest X-ray. As a result of faster growth in the more expensive services, advanced imaging accounted for 54 percent of total imaging expenditures, up from 43 percent in 2000. In dollar terms, spending on advanced imaging increased from about $3 billion to about $7.6 billion, with spending on MRI services accounting for nearly half of this increase. In contrast, spending on ultrasounds, standard imaging (mostly X-rays), and procedures that use imaging grew more slowly, from about $4 billion to about $6.5 billion. Overall, 77 percent of the growth in Medicare’s spending from 2000 through 2006 on imaging services paid for under the physician fee schedule was associated with the growth in volume and complexity of imaging services (as measured by growth in RVUs) rather than other factors. Compared with 2000, in 2006 more beneficiaries obtained imaging services and average use per beneficiary also increased. The proportion of Medicare beneficiaries receiving at least one imaging service increased from 63 percent to 66 percent during this period. Moreover, beneficiaries’ average annual use of imaging services from 2000 through 2006 increased about 25 percent, from 5.6 to 7 imaging services, for those who received at least one imaging service. More complex advanced imaging modalities generally showed the fastest growth. For the same period, the proportion of beneficiaries using CT scans increased 39 percent, and use of CT scans on a per beneficiary basis increased 22 percent. (See app. III for beneficiaries’ use of imaging services for 2000 compared with 2006.) Several factors account for the rest of the growth in Medicare spending for imaging services. Growth in ancillary items, such as radiopharmaceuticals, which are required to provide certain imaging tests, represents 7 percent of the spending growth. Physicians bill separately for these items. Growth in the number of beneficiaries and changes in Medicare’s physician fees from 2000 through 2006 account for another 16 percent of the spending growth (see fig. 2). Contrasting explanations have been offered for why imaging use and use of advanced imaging services, in particular, have grown rapidly during this period. In interviews with physician specialty organizations that use imaging services, representatives cited the following as contributors to imaging growth: technological innovation (such as equipment becoming smaller and more portable), patient demand influenced by direct-to-consumer advertising, defensive medicine to protect physicians from malpractice suits, and an increase in clinical applications. Representatives from physician specialty organizations also stated that older invasive diagnostic procedures are being replaced in some cases with new, less invasive imaging procedures that are less costly, reduce patients’ discomfort, and reduce patients’ recovery time. While representatives from private health plans and the companies they contract with specifically to manage imaging services concurred that some of these factors were key contributors to growth, they cited two other factors for the growth in spending. First, they noted that the ability of physicians to refer patients to their own practices for imaging was a major spending driver.
Second, they noted that primary care physicians often lacked knowledge about the most appropriate test to order for a patient, and therefore tended to order a significant portion of imaging tests that would be considered unnecessary based on clinical guidelines. From our analysis of data from the 6-year period, we observed several trends regarding spending growth and the provision of imaging services in physician offices. First, a larger share of Medicare Part B spending for imaging services has shifted from the hospital settings—where the institution receives payment for the technical component of the service—to physician offices, where physicians receive payment for both the technical and professional components of the service. Second, consistent with this shift, physicians who provided in-office imaging services obtained an increasing share of their Medicare Part B revenue from imaging services. Third, in-office imaging spending per beneficiary varied substantially across geographic regions of the country, suggesting that not all the spending was necessary or appropriate. These trends raise concerns about whether Medicare’s physician payment policies contain financial incentives for physicians to overuse imaging services. In addition, the increased provision of imaging services in physician offices may have implications for quality. We estimate that about one-tenth of the growth in Part B spending on imaging from 2000 through 2006 resulted from this shift in settings. From 2000 through 2006, spending on imaging increased in both treatment settings. However, spending in physicians’ offices grew twice as fast—at an average annual rate of 14 percent—compared with spending in the hospital setting, which grew at an average annual rate of 7 percent. During the period from 2000 through 2006, radiologists accounted for a declining share of in-office imaging spending—36 percent in 2000 compared to 32 percent in 2006. Physicians in specialties other than radiology accounted for an increasing share of in-office imaging—64 percent in 2000 compared to 68 percent in 2006. Cardiologists’ spending on imaging services represented the largest share of in-office imaging spending of physician specialties other than radiology, growing from about $1.2 billion to about $3.0 billion—29 percent in 2000 compared to 35 percent in 2006. An array of physician specialties—including primary care, orthopedics, and vascular surgery—accounted for the remainder of in-office spending. The growth in spending by physicians in specialties other than radiology is partly due to an increasing proportion of these physicians billing for in-office services. While still small, this proportion has grown rapidly—more than doubling from 2000 to 2006 (from 2.9 to 6.3 per 100 physicians)—and is much higher for certain specialties, such as cardiology. For example, the proportion of cardiologists who billed for advanced in-office services nearly doubled between 2000 and 2006, rising from about 24 per 100 physicians to about 43 per 100 physicians. Although physicians generally are prohibited from referring Medicare beneficiaries for imaging services to an entity with which the physician has a financial relationship, there is an “in-office ancillary exception.” Under this exception, physicians may be paid by Medicare, for example, if the services are provided by the referring physicians in the same building where the physicians provide other services unrelated to the furnishing of imaging services.
MedPAC and others have reported on the recent emergence of leased or other shared arrangements whereby “in-office” imaging services are actually delivered at another site. For example, physicians may rent an imaging center’s services (employees and machinery) for a specific day of the week and refer their patients to that center on that day. The referring physician bills Medicare for providing the test, in turn paying the provider or center that actually performed the test a lower fee. In other instances, physicians may purchase imaging equipment, which is then leased to an imaging center. In this case, the physician refers patients to the imaging center, which bills for the service and then pays the physician a fee. MedPAC has expressed concerns that such arrangements create financial incentives that could influence physicians’ clinical judgment, leading to unnecessary services. (See Statement of Glenn M. Hackbarth, J.D., Chairman of MedPAC, at a hearing entitled “Use of Imaging Services: Providing Appropriate Care for Medicare Beneficiaries,” July 18, 2006, before the Subcommittee on Health, Committee on Energy and Commerce, House of Representatives, 109th Cong.) A recent study of imaging providers in California estimated that about 60 percent of providers billing for in-office imaging did not actually own the imaging equipment, but were involved in leasing or other arrangements designed to take advantage of the in-office ancillary exception. (Jean M. Mitchell, “The Prevalence of Physician Self-Referral Arrangements After Stark II: Evidence from Advanced Diagnostic Imaging,” Health Affairs, Web exclusive (Apr. 17, 2007).) Consistent with these trends, physicians in specialties other than radiology who billed Medicare for in-office imaging services obtained an increasing share of their Medicare revenue from imaging services from 2000 to 2006. For example, cardiologists’ share of Medicare revenue attributable to in-office imaging services increased from about one-quarter in 2000 to over one-third in 2006 (see fig. 4). During this period, vascular surgeons also saw a large increase—from 10 percent to about 19 percent—in the share of their Medicare revenue generated from in-office imaging services. The same trend was evident for orthopedic surgeons, primary care physicians, and urologists. Substantial variation in imaging use across geographic regions of the country suggests that not all utilization of in-office imaging services may be appropriate. We found that per beneficiary spending on imaging services provided in physician offices varied almost eight-fold across the states in 2006—from $62 in Vermont to $472 in Florida (see fig. 5). Physician spending on in-office imaging was the highest in the South, Northeast, and in certain states in the West. Given the magnitude of the differences in imaging use across geographic areas, variation is more likely due to differences in physician practice patterns than to differences in patient health status. Further concerns about the appropriateness of imaging use are raised by research on geographic variation showing that, in general, more health care services do not necessarily lead to improved outcomes. The shift in imaging services to physician offices has the potential to encourage overuse, given physicians’ financial incentives to supplement relatively lower professional fees for interpretation of imaging tests with relatively higher fees for performance of the tests.
Physician ownership of imaging equipment can generate additional revenue for a practice, even after taking into account the high costs of purchasing advanced imaging equipment. MedPAC has expressed concern about whether Medicare’s payment methodology overpays physicians for imaging equipment, because of outdated estimates of equipment use. An analysis published in 2005 of private insurance claims data on X-ray services concluded that orthopedists, podiatrists, and rheumatologists were two to three times more likely to order imaging services if the ordering physician also performed the examination, compared with those who referred patients to a radiologist. In addition, the authors found that podiatrists and rheumatologists were also more likely to order more intensive tests. Another study showed that physicians who refer patients for imaging in their own office are at least 1.7 to 7.7 times more likely to order imaging than those physicians in the same specialty who do not self-refer. In addition to concerns about incentives for inappropriate use of imaging services, the shifting of services from hospital and other institutional settings to physician offices may have implications for quality. Hospitals must comply with Medicare’s “conditions of participation” rules, which include general standards for imaging equipment and facilities, staff qualifications, patient safety, record-keeping, and proper handling of radioactive materials. In contrast, no comprehensive national standards exist for services delivered in physician offices other than a requirement that imaging services are to be provided under at least general physician supervision—that is, a physician is responsible for the training of the technical staff performing the imaging service, and the maintenance of the necessary equipment and supplies. CMS, however, has expanded existing quality and business performance standards for IDTFs. For example, CMS has explicitly prohibited hotels and motels from being considered appropriate sites for an IDTF setting. Regulatory responsibilities relating to imaging devices and services are divided among federal agencies as well as the states. The Food and Drug Administration (FDA) and the Nuclear Regulatory Commission (NRC) each have regulatory responsibilities for devices that are used to provide imaging services. For example, FDA is responsible for establishing quality standards for mammography equipment, and ensuring that manufacturers of radiation-emitting imaging equipment are in compliance with applicable performance standards. While FDA does not regulate the practice of medicine, such as the establishment of patient radiation dose limits, it is responsible for ensuring that medical imaging systems are safe and effective. NRC does not regulate medical products, but does oversee the medical uses of nuclear materials used by physicians, hospitals, and others through licensing, inspection, and enforcement programs. Regarding licensing, in many cases NRC has transferred this authority to the states. While all states have radiation control boards that monitor the use of radiation by imaging facilities, they do not regulate nonradiation imaging such as MRI or ultrasound, nor do they monitor the quality of imaging. Their primary mission is to ensure patient safety. 
In addition, officials from the Conference of Radiation Control Program Directors, Inc.—whose primary membership is made up of radiation professionals in state and local government who regulate the use of radiation sources—told us that states vary in the comprehensiveness of their rules as well as their ability to monitor compliance, often lacking the resources to perform all of their functions. Further, officials from the American Society of Radiologic Technologists told us that states also vary in their licensure requirements for imaging providers—some do not have any licensure or certification laws for radiology technologists, and most states also allow technicians to perform advanced imaging without additional training. In a 2007 report, we recommended that CMS require sonographers—technologists who perform ultrasound examinations—paid by Medicare to be credentialed or work at accredited facilities. Although physicians can seek to have their facility accredited—a process by which facilities and providers are recognized as meeting certain quality, safety, and performance thresholds by one of the three primary accreditation organizations for imaging—officials we interviewed from these organizations estimated that very few physician offices are accredited. Studies of the provision of imaging tests in this setting showed quality concerns in several areas, such as staff credentials, poor image quality, failure to monitor radiation exposure, and inadequately maintained equipment. Officials from some of the health plans, accreditation organizations, and other industry groups that we interviewed indicated similar concerns. For example, a health plan official told us that 25 percent of facilities in its network, including physician offices, failed credentialing, most commonly because of the lack of a board-certified radiologist on staff or problems with imaging equipment. Two of the three primary accreditation organizations told us that general problems encountered during the accreditation process of facilities, including physician offices, related to failure of staff to keep up with professional education requirements, lack of documentation of quality assurance policies, poor quality of the images, and incomplete or inadequate interpretation. The third accreditation organization told us that the failure rate for initial applications was about half, although the majority of reapplicants passed after correcting deficiencies. Typically, the main deficiencies were equipment that needed to be recalibrated and a lack of quality control programs. The officials from this organization were concerned about the implications for quality of the vast majority of providers who did not apply for accreditation, given a 50 percent initial failure rate for providers self-selecting to apply for accreditation. Similar to Medicare, private health plans in recent years have experienced rapid growth in imaging services, particularly in advanced imaging. We examined a sample of 17 private health care plans, which were selected because they were known to take steps to actively manage imaging services. Most of the plans in our study contracted with companies called radiology benefits managers (RBMs) to perform imaging management activities on their behalf. Officials of the plans or the RBMs they use told us that prior authorization, which requires physicians to obtain some form of plan approval before ordering a service, was the practice most important to managing their physicians’ use of imaging services.
Other practices they noted included privileging, by which a plan limits its approval for ordering certain imaging services to physicians in certain specialties, and profiling, which entails a statistical analysis of medical claims data measuring an individual physician’s use of services relative to a desired benchmark. With respect to managing the growth in Medicare physician expenditures on imaging services, CMS does not employ the practices used by the plans in our study. The agency’s focus is largely on physician billing practices, and its management activities therefore occur at a point when services have already been ordered and performed. CMS conducts profiling activities, but these are consistent with the agency’s focus on identifying improper billing rather than on targeting services showing high spending growth rates. CMS officials indicated that approaches such as prior authorization would likely require significant administrative resources, and that the agency would have to consider any specific initiatives in light of its existing legal authority. All the health plans in our study used prior authorization, the practice of determining whether to grant physicians approval to order some or all imaging services before they are delivered, to manage spending on imaging services. This practice was in addition to retrospective payment safeguards commonly used to identify medical claims that do not meet certain billing criteria. Under prior authorization, plans only pay physicians for imaging services rendered that have received plan approval. Almost all of the plans—16 of 17—conducted their prior authorization activities through an RBM. The steps plans typically use in the prior authorization process are shown in figure 6. For example, prior authorization is typically used by RBMs for physicians requesting imaging services for lower back pain, a common condition for which physicians inappropriately request MRIs. Typically, the process works as follows: A physician requests an MRI of the lumbar spine with contrast for a patient with symptoms of lower back pain and no other symptoms. In considering this request, the RBM’s nurse manager follows a protocol of questions based on the ACR clinical guidelines for “acute low back pain, uncomplicated.” Such questions could include “How long has the patient had symptoms? Have you tried conservative management?” These questions are aimed at discouraging the use of advanced imaging at the condition’s onset, unless certain other symptoms or conditions are present. The physician has the option of consulting with one of the RBM’s board-certified radiologists or its medical director if there is disagreement with the initial decision to deny a request. If the physician still disagrees with the decision and proceeds with the request, the RBM will likely deny it. Alternatively, if the physician’s request for an MRI of the lumbar spine with contrast is made for a patient with low back pain and the other specified symptoms or conditions, the RBM waives conservative management and approves the request. The plans in our study varied in their prior authorization policies. For example, officials we interviewed from almost all of the plans reported that they targeted prior authorization for technologically complex or high-cost imaging tests, but varied in what specific tests were included under their programs.
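The lumbar MRI walkthrough above is essentially a small decision protocol. The sketch below captures its general shape; the six-week threshold, parameter names, and outcome labels are our assumptions for illustration, not an actual RBM protocol or the ACR guideline text.

```python
# Simplified sketch of the prior-authorization review described above for an
# MRI of the lumbar spine ordered for low back pain. Thresholds, parameter
# names, and outcome labels are illustrative assumptions only.

def review_lumbar_mri_request(weeks_of_symptoms,
                              tried_conservative_management,
                              has_other_specified_symptoms):
    """Return an initial determination for the requested study."""
    if has_other_specified_symptoms:
        # Other specified symptoms or conditions are present: conservative
        # management is waived and the request is approved.
        return "approve"
    if tried_conservative_management and weeks_of_symptoms >= 6:
        return "approve"
    # Otherwise, discourage advanced imaging at the condition's onset; the
    # physician may consult the RBM's radiologist or medical director.
    return "refer for peer consultation; likely deny"


print(review_lumbar_mri_request(2, False, False))
```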
In addition, to determine the appropriateness of a given diagnostic test or procedure, most plans relied on criteria developed by the American College of Cardiology or the ACR, but they also customized these criteria to their specifications. Three of the plans used a variant of prior authorization, called prior notification, which requires the physician to contact the plan prior to sending a patient for an imaging scan. If the plan determines that another test is more appropriate, based on clinical guidelines or other criteria, the plan can make this suggestion to the physician, but the physician has ultimate discretion to choose among options. Plan officials reported significant decreases in utilization after implementing a prior authorization program. For example, several of the plan officials we interviewed reported that annual growth rates were reduced to less than 5 percent after prior authorization; annual growth rates for these plans had ranged from 10 percent to more than 20 percent before prior authorization programs were implemented. The biggest utilization decreases occurred immediately after implementation. One plan’s medical director said that prior authorization was the plan’s most effective utilization control measure, because it requires physicians to attest to the value of ordering a particular service based on clinical need. Plan officials noted that there were costs associated with implementing a prior authorization program. Under a typical arrangement, plans paid a per-member per-month fee to an RBM to conduct prior authorization on their behalf. The plan and RBM officials we spoke with indicated that outright denial rates for requests to order imaging services were low, primarily because requesting physicians typically agree to a more clinically appropriate test or decide to forgo the test after they are shown countervailing evidence. These officials also contended that a spillover effect exists with respect to future ordering. That is, the interaction between plans and physicians that occurs during the prior authorization process enables physicians to make more educated decisions about what services to order for future patients with the same condition. The net effect has been to reduce unnecessary utilization to levels that are lower than they would have been in the absence of prior authorization. An official at one plan told us about the plan’s experience using RBM-performed prior authorization. To control rapid spending growth, the plan contracted with an RBM in the late 1990s to perform prior authorization for advanced imaging services. After 3 years, when expenditures for these services stopped growing, the plan discontinued using the RBM for prior authorization, assuming that a lasting change had been achieved in physicians’ ordering of the services. However, over the subsequent 3 years, annual growth in imaging services climbed to more than 10 percent, on average. In 2006, the plan reinstated the RBM’s prior authorization program, and 6 months after implementation, growth had again declined to single digits. To a lesser extent, plans in our study used privileging and profiling to manage utilization and spending on health care services in general and imaging services in particular. Over one-third of the plans used privileging, a practice that limits, according to specialty, a plan’s pool of physicians who can order certain imaging services.
For example, one plan in our study allowed orthopedic surgeons to perform CT scans of body joints, but did not allow endocrinologists to perform these scans. One of the RBMs we interviewed permitted ear, nose, and throat physicians to perform CT scans of the sinuses, head, or neck, but none below the neck. Plan and RBM officials told us that their privileging rules were based on established medical practice guidelines and research and that physicians received advance notice of the plan’s privileging rules—that is, which specialties were permitted to perform specific services. Plans enforced adherence to these rules through their claims adjudication systems: if a physician was not privileged to order or perform a specific imaging service, the plans would not pay for the images taken or interpreted. Typically, radiologists were allowed to perform all imaging services because of their imaging-specific education and training. Profiling is a practice that is carried out through a statistical analysis of paid claims. Eight of the plans in our study used profiling to collect information about individual physicians’ ordering history and provision of imaging services. Using this information, the plans compare a physician’s practice patterns against a benchmark, or norm, based on the practice patterns of the plan’s other physicians in the same specialty. Typically, the plans inform physicians of their relative performance based on these profiling analysis results and provide additional education to physicians who order inappropriately or order at rates higher than their peers. An official at one RBM we interviewed noted that in addition to the contemporary peer comparisons, the firm’s profiling activities include longitudinal analyses to determine if a physician’s ordering of services has increased over time relative to the physician’s peers regionally and nationally. The official noted that after implementing its profiling program, the RBM observed a reduction in the number of images ordered by physicians who provide high-technology imaging in their own offices. Prior to profiling, these physicians provided three to five times more imaging services than their counterparts who referred the imaging services to other practitioners or facilities. Unlike the private plans in our study, CMS’s management practices are not oriented toward controlling spending prospectively—that is, through preapproval practices, such as prior authorization and privileging. Instead, CMS employs, through its claims administration contractors, an array of retrospective payment safeguards, or activities, that occur in the post-delivery phase of monitoring services. These activities are designed to achieve payment accuracy; in fact, CMS evaluates contractors’ performance in terms of a payment error rate. In general, the contractors responsible for administering Part B payments are required to perform ongoing data analyses and take action on the services or physicians that present the greatest risk of improper payments. The contractors use various techniques, such as profiling, to examine unexplained increases in utilization, abnormally high utilization of services by an individual physician relative to the physician’s peers, and other indicators of aberrancies.
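Profiling, as described above, amounts to comparing each physician's use of imaging against a benchmark built from same-specialty peers. The sketch below shows that comparison on made-up data; the data layout, rates, and the 1.5x flagging threshold are our assumptions, not any plan's or contractor's actual method.

```python
# Illustrative sketch of peer-benchmark profiling as described above.
# The data, layout, and flagging threshold are assumptions for illustration.

from collections import defaultdict
from statistics import mean

# (physician id, specialty, imaging services ordered per 100 patients)
profiles = [
    ("A", "cardiology", 55.0),
    ("B", "cardiology", 48.0),
    ("C", "cardiology", 140.0),
    ("D", "orthopedics", 30.0),
    ("E", "orthopedics", 33.0),
]

rates_by_specialty = defaultdict(list)
for _, specialty, rate in profiles:
    rates_by_specialty[specialty].append(rate)

benchmarks = {spec: mean(rates) for spec, rates in rates_by_specialty.items()}

# Flag physicians ordering well above their specialty benchmark (here, 1.5x).
for physician, specialty, rate in profiles:
    if rate > 1.5 * benchmarks[specialty]:
        print(f"{physician} ({specialty}): {rate:.0f} per 100 patients vs. "
              f"benchmark {benchmarks[specialty]:.0f}; candidate for outreach")
```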
The analyses have also produced the evidence needed to modify coverage or payment policies at the local contractor level—referred to as a local coverage determination. For example, with respect to imaging services, one contractor that had conducted reviews of echocardiograms, nuclear medicine, and PET and CT scans modified its coverage policies for these services by limiting the number of times the services could be billed for an individual patient within a certain time frame. In a 2007 report, we concluded that CMS’s existing physician profiling and educational outreach activities, while focused largely on improper billing practices and potential fraud, put the agency in a favorable position to adopt profiling as a strategy to curb inappropriate spending resulting from physicians’ inefficient practices. As with the private plans we reviewed for this study and the health care payers in our 2007 study, a consequence of profiling for efficiency could be to achieve physician compliance with clinical practice standards and, in doing so, reduce inappropriate ordering and use of services. In response to our recommendation to adopt an efficiency-oriented profiling program, CMS commented that this program fit into efforts the agency was pursuing to improve quality and efficiency in Medicare. To that end, CMS has contracted with a firm to develop efficiency measures for certain anatomically specific imaging services with an anticipated completion date of December 2008. These measures are to be based on clinical evidence and are designed to provide the agency, in the firm’s words, “the ability to more effectively manage the rapid diffusion of new technologies and patient-driven demand.” The firm plans to test these measures and provide insight into their development and use. In the case of lumbar MRI, for example, the plan is to track physicians’ behavior with respect to the conventionally accepted use of this service—namely, that the service is not typically indicated unless the patient has received a period of conservative therapy. Using a coding system, the firm will track whether the physician (1) provided documentation that the patient had a trial of conservative therapy prior to the MRI, (2) provided no documentation or conservative therapy prior to the MRI, or (3) documented that the patient did not require conservative therapy. The codes, in this instance, are intended to capture whether appropriate evidence-based guidelines were adhered to. CMS officials indicated that approaches such as prior authorization would likely require significant administrative resources. In addition, they stated that they were not aware of any statutory provision either explicitly authorizing or prohibiting the use of such approaches. Accordingly, they stated that if they were to pursue prior authorization, they would need to evaluate any specific initiatives in light of CMS’s overall authority with respect to the Medicare program. The rapid increase in Medicare spending on imaging services paid for under the physician fee schedule from 2000 to 2006 poses challenges for CMS in managing the spending growth on these services.
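The lumbar MRI coding scheme described above boils down to a three-way classification of what the claim documentation shows. A minimal sketch follows; the code values and names are our own reading of the description, not the contractor's actual measure set.

```python
# Sketch of the three-way documentation coding described above for lumbar
# MRI. Code values and names are our reading of the description, not the
# contractor's actual measure set.

def lumbar_mri_adherence_code(documented_conservative_therapy,
                              documented_therapy_not_required):
    """Classify a lumbar MRI claim against the conservative-therapy guideline.

    1 = conservative therapy documented before the MRI
    2 = no documentation of conservative therapy (possible guideline departure)
    3 = documentation that conservative therapy was not required
    """
    if documented_conservative_therapy:
        return 1
    if documented_therapy_not_required:
        return 3
    return 2


print(lumbar_mri_adherence_code(False, False))  # 2
```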
While much of this growth may be appropriate, the pace of increase and the shift toward more costly advanced imaging; a shift toward providing imaging in physician offices, where there is generally less oversight; broader use of imaging by physician specialties other than radiology; and the substantial variation of in-office imaging spending per beneficiary across geographic regions of the country raise concerns. Our examination of private plans—selected because they were known to take steps to actively manage imaging services—provides examples of practices to constrain spending growth. Unlike CMS, the private plans in our study had management practices oriented toward controlling spending prospectively rather than solely focusing on activities that occur after the imaging service has been provided to the beneficiary. Specifically, our examination of these plans found a common thread: requiring prior authorization for certain imaging services, such as advanced imaging, was effective in reducing these plans’ spending growth. Given the pressures of a fiscally unsustainable Medicare program, CMS has undertaken several initiatives aimed at improving its performance as a purchaser of health care services. With respect to rapidly growing imaging services, the experience of the private plans in our study suggests that the benefits of front-end management of these services exceeded their costs. We believe CMS may be able to improve its prudent purchaser efforts by adopting strategies such as prior authorization and privileging. To do this, CMS would need to assess the feasibility of using these approaches for imaging services under the Medicare Part B program, including the costs or staffing resources needed to carry out these activities and the potential savings that might accrue from these activities. Moreover, CMS would also need to assess any specific activities in light of its authority under the Medicare program and determine if additional legislation is necessary. To address the rapid growth in Medicare Part B spending on imaging services, we recommend that CMS examine the feasibility of expanding its payment safeguard mechanisms by adding more front-end approaches to managing imaging services, such as privileging and prior authorization. We obtained written comments on a draft of this report from HHS (see app. V). We obtained oral comments from representatives of two organizations, AHIP and AMIC, selected because they represent a broad array of stakeholders with specific involvement in the imaging industry. HHS stated that, through ongoing data analysis and evaluation, Medicare contractors have identified imaging services as an area that poses a high risk to the Medicare Trust Fund, and are therefore continuing to conduct ongoing medical review and provider education. We are pleased that CMS contractors are scrutinizing imaging services through post-payment claims review; however, as we noted in the draft of this report, we believe that more front-end approaches to managing these services may also be desirable. Regarding our recommendation, HHS raised several concerns about the administrative burden, as well as the advisability of prior authorization for the Medicare program. First, the agency said there was no independent data—other than self-reported information—on the success of RBMs in managing imaging services.
Second, it stated that RBMs’ use of potentially proprietary information, including clinical guidelines and protocols for approval of services, may be inconsistent with the public nature of Medicare. Third, the effectiveness of a prior authorization program could be diminished if a high proportion of denied services were overturned through Medicare’s statutory and regulatory appeals process. HHS also raised a question about how prior authorization would fit within its current post-payment review program. Regarding the effectiveness of prior authorization and use of RBMs in the private sector, as we noted in the draft report, all the plans in our study had implemented some form of a prior authorization program, and all but one had hired an RBM to manage imaging services for its enrollees. It is unlikely that these plans—ranging in size from small FEHBP plans to nationwide private sector plans with up to 34 million covered lives—would incur RBM fees to implement prior authorization unless they believed it to be effective. As we also noted in the draft report, the use of prior authorization as a tool to manage imaging is a growing trend in the private sector. We do not dispute HHS’s reservations about prior authorization, and agree that these concerns will require careful examination within the context of Medicare statutes and regulations. Because we believe post-payment claims review alone is inadequate to manage one of the fastest-growing parts of Medicare, addressing these concerns should be incorporated into CMS’s feasibility analysis of adding front-end approaches to its prudent purchasing efforts. If Medicare is to become a “value-based” purchaser of health services, for the sake of both its beneficiaries and taxpayers, it should consider going beyond its traditional methods of managing benefit payments to achieve this result. AHIP and AMIC representatives presented contrasting concerns about our discussion of prior authorization in the draft report. AHIP representatives characterized prior authorization as primarily an educational tool to persuade physicians to prescribe imaging studies in conformance with practice standards, while AMIC representatives characterized it as a cost-cutting tool that achieves savings by imposing burdens on physicians, with little or no educational benefit. Their views on the value of RBMs as implementers of prior authorization are similarly contrasting. Specifically, AHIP representatives’ primary concern was our characterization of prior authorization as a cost-control measure rather than a tool used by plans to improve quality and ensure appropriate use of imaging services by adherence to evidence-based guidelines. Officials we interviewed from plans and RBMs generally viewed prior authorization as the most effective tool to reduce inappropriate utilization and spending growth rather than to improve quality—many of the representatives described it as a utilization management tool. AHIP representatives said the draft report did not include provider consultations with radiologists as another strategy that plans employ. We have revised the report to note that providers have that option if they disagree with a plan’s initial decision to disapprove a requested imaging service. AHIP representatives also raised concerns that the draft report did not give sufficient attention to market structure incentives, such as leasing arrangements and manufacturers’ attempts to increase acquisition of imaging equipment.
Our report does address the topic of incentives for inappropriate use of imaging; however, a detailed analysis is beyond the scope of our work. AHIP representatives also provided technical comments, which we incorporated as appropriate. AMIC representatives raised four principal concerns about the draft report. First, they stated the draft report should have focused on strategies such as accreditation (which improves quality) and adherence to clinical practice guidelines (which results in appropriate use of imaging services), rather than private sector strategies such as use of RBMs, prior authorization, and other techniques that focus solely on controlling costs. Specifically, AMIC representatives expressed several concerns about RBMs. They stated that the for-profit structure and lack of transparency in sharing appropriateness guidelines make RBMs incompatible with the Medicare program. They also contended that there is no evidence that RBMs improve care or add value, and that RBMs involve physicians in lengthy interactions. Moreover, they stated that prior authorization had been tried and proven unfeasible for Medicare for lack of sufficient administrative resources. In the draft report, we noted plans’ increasing use of accreditation to assure quality of imaging services. With regard to prior authorization and RBMs, we are recommending that CMS consider the feasibility of these and other front-end approaches. We would also note that while HHS indicated that prior authorization might be inconsistent with the Medicare program, the department did not rule it out as a strategy that had been tried and proven unfeasible for Medicare. Second, AMIC representatives stated that in emphasizing spending growth we had failed to recognize the benefits of imaging and its effects in reducing overall health costs by substituting for more invasive procedures or treatments. We acknowledged the benefits of imaging throughout the draft report and noted that while some of this spending growth may be appropriate, financial incentives inherent in Medicare’s payment policies for potentially inappropriate use of imaging in physicians’ offices, and their implications for a fiscally unsustainable Medicare program, cannot be ignored. We are not aware of any peer-reviewed studies that conclusively show the role of imaging in reducing overall health care costs. Third, AMIC representatives stated that by focusing only on Part B spending under the physician fee schedule, the draft report did not acknowledge growth in imaging across other sites of care, such as hospitals. As we stated in the draft report, Medicare’s physician payment policies contain financial incentives for physicians to directly benefit from higher fees paid for the provision of imaging services in their offices, while receiving lower fees for interpretation of images provided in hospitals. However, we have added additional information to the report, noting that about two-thirds of all imaging services were delivered in the hospital setting in 2006, and that spending on imaging services delivered in physician offices grew twice as fast as spending on services delivered in the hospital setting. AMIC’s fourth concern was that the draft report did not discuss the fairness of the payment reductions resulting from the changes mandated in the DRA. As noted in the draft report, we will examine the effects of payment changes mandated by the DRA in a separate report.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies to the Secretary of HHS, the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix VI. To determine trends in Medicare Part B spending, we analyzed Medicare claims data from 2000 through 2006 using the Part B Extract Summary System (BESS)—a data source that aggregates data to the billing code designated under the Healthcare Common Procedure Coding System (HCPCS). We extracted claims where the first digit of the Berenson-Eggers Type of Service (BETOS) code was equal to “I,” indicating the line item was an imaging service. On the basis of data from the Denominator File—a database that contains enrollment data and entitlement status for all Medicare beneficiaries enrolled and/or entitled in a given year—we excluded beneficiaries who had 12 months of enrollment in a health maintenance organization in a given year. We aggregated the 18 BETOS categories into six major categories of imaging services, also referred to as modalities: CT, MRI, nuclear medicine, ultrasound, procedures that use imaging, and X-rays and other standard imaging. Our spending totals include two parts of the imaging service paid for by Medicare: (1) the technical component—the performance of the examination itself—and (2) the professional component—the physician’s interpretation of the examination. We also examined the association between growth in total Part B imaging spending and various factors, including the growth in the volume and complexity of services, the number of Medicare fee-for-service beneficiaries, and Medicare fees for imaging services. To do this, we first calculated the growth in total Part B spending from 2000 through 2006 and then estimated the relative contribution of each factor to the growth in total Part B imaging spending. To estimate the effect of volume and intensity on the growth in total spending, we totaled the relative value units (RVUs) associated with each imaging service from 2000 and 2006. Because RVUs for imaging services may change from year to year, we used RVUs for the most recent year for which data were available, 2006. We estimated the effect of separately billed items used to deliver imaging services, such as radioactive agents and iodine supplies, by comparing total spending on these items in 2000 and 2006. Physicians submit separate bills for these items and are paid based on prices established by Medicare’s claims administration contractors. These services are not assigned RVUs in the physician fee schedule. We compared the number of Medicare beneficiaries in 2000 and 2006 to determine the effect of enrollment growth, and we measured changes in Medicare fees for imaging services by comparing the Medicare conversion factor in 2000 with the conversion factor in 2006.
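The claims-selection step described above (keep line items whose BETOS code begins with "I" and drop beneficiaries enrolled in an HMO for all 12 months of the year) can be expressed compactly. The sketch below assumes simple dictionary records; the field names are ours, not the actual BESS or Denominator File layouts, and the sample records are hypothetical.

```python
# Minimal sketch, under assumed field names, of the claims selection
# described above: keep imaging line items (BETOS code starting with "I")
# and exclude beneficiaries with 12 months of HMO enrollment in the year.

def select_imaging_lines(claim_lines, hmo_months_by_beneficiary):
    """Yield imaging line items for fee-for-service beneficiaries.

    claim_lines: iterable of dicts with 'bene_id', 'betos', 'allowed_amount'.
    hmo_months_by_beneficiary: dict mapping bene_id to months of HMO enrollment.
    """
    for line in claim_lines:
        if not line["betos"].startswith("I"):
            continue  # not an imaging service
        if hmo_months_by_beneficiary.get(line["bene_id"], 0) == 12:
            continue  # enrolled in an HMO all year
        yield line


lines = [
    {"bene_id": "1", "betos": "I2A", "allowed_amount": 233.0},  # hypothetical imaging line
    {"bene_id": "2", "betos": "M1A", "allowed_amount": 75.0},   # hypothetical non-imaging line
]
print(sum(l["allowed_amount"] for l in select_imaging_lines(lines, {"1": 0, "2": 0})))
```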
To determine the share of Medicare beneficiaries who received any imaging services and, for those beneficiaries, the average number of services provided, we used Medicare Part B Physician/Supplier Claims data for 2000 and 2006 and Denominator File data for those same years. To supplement our quantitative examination of spending trends and to understand stakeholder perspectives on these trends, we obtained information from 19 physician specialty groups, including the American College of Cardiology and the American College of Radiology. These 19 specialties were chosen because imaging is integral to their practices. In addition, we interviewed officials from two organizations, the Access to Medical Imaging Coalition and the Medical Imaging & Technology Alliance (a division of the National Electrical Manufacturers Association), that represent a diverse and large number of stakeholders, including equipment manufacturers, physician specialties, patient-advocacy organizations, and others. We also interviewed representatives from America’s Health Insurance Plans (AHIP), a trade association that includes about 90 percent of health insurers, as well as representatives from 17 private plans and five of the largest RBMs that manage imaging services for health plans. To examine the relationship between spending growth and the provision of imaging services in physician offices, we analyzed Medicare claims data from 2000 and 2006. We first examined the extent to which Medicare Part B spending on imaging services shifted to physician offices from independent diagnostic testing facilities (IDTFs) and hospital inpatient, outpatient, and emergency room settings. To examine geographic variation in per beneficiary spending on in-office imaging, we divided total in-office spending for each state by the number of Medicare beneficiaries for that state. However, since total in-office spending may vary across states because of Medicare’s geographic price differences, we derived an adjusted spending total by multiplying the total RVUs for in-office imaging in each state by the national Medicare physician fee schedule conversion factor. For this analysis, we excluded data from Hawaii because spending per beneficiary appeared to be too low compared with other states of similar size and Medicare beneficiary population. We also examined how physicians’ share of their Medicare Part B revenue from imaging services changed during this period and how this share varied by physician specialty. Specifically, by physician specialty, we examined the number of non-radiologists who submitted bills that included the provision of the imaging examination, and the share of overall allowed charges that were attributable to imaging services provided in physician offices. To do this, we used Medicare Part B claims data from the National Claims History files and constructed data sets for 100 percent of Medicare claims for physician services performed by physicians in the first 28 days of April 2000 and April 2006. We established a consistent cutoff date (the last day of the year) for each year’s data file and only included those claims for April services that had been submitted by that date. Because claims continue to accrete in the data files, this step was necessary to ensure that the earlier year was not more complete than the later year.
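The geographic adjustment described above, valuing each state's in-office imaging RVUs at the national conversion factor before dividing by the state's beneficiary count, is shown below with hypothetical state totals; neither the inputs nor the function name come from the report's data, and only the 2006 conversion factor is taken from earlier in this report.

```python
# Sketch of the price-adjusted, per-beneficiary spending comparison
# described above. Inputs are hypothetical; only the 2006 national
# conversion factor appears earlier in the report.

NATIONAL_CONVERSION_FACTOR_2006 = 37.8975  # dollars per RVU


def adjusted_in_office_spending_per_beneficiary(total_in_office_rvus,
                                                beneficiaries,
                                                conversion_factor=NATIONAL_CONVERSION_FACTOR_2006):
    """Value a state's in-office imaging RVUs at the national conversion
    factor, then divide by the state's Medicare beneficiary count."""
    return (total_in_office_rvus * conversion_factor) / beneficiaries


# Hypothetical state totals, not the actual Vermont or Florida inputs.
print(round(adjusted_in_office_spending_per_beneficiary(160_000, 95_000), 2))
```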
If non-radiologist physicians performed imaging examinations, either billed separately or in conjunction with an interpretation, and the place of service was “physician’s office,” then they were deemed to be performing those services in-office. We focused on non-radiology specialties that had at least 500 individual physicians who billed Medicare for any service and in which at least 5 percent of those physicians billed for any imaging in the period examined, which yielded 297,000 physicians in 2000 and 353,000 in 2006. To examine the approaches used by private payers that may have lessons for Medicare in managing spending on imaging services, we selected 17 private payers—a combination of national and regional payers—known to be active in managing imaging benefits. We selected five plans because they had publicly presented information to the Congress or MedPAC on prior occasions about their imaging management practices, or had descriptions of their programs appear in the medical literature. We selected six private plans offered to federal employees under the Federal Employees Health Benefits Program (FEHBP), and six private plans identified through our interview with AHIP. Appendix IV provides characteristics of our sample of private plans. We conducted interviews with, or submitted questions to, these plans. We also interviewed five radiology benefits managers—organizations hired by private payers to manage radiology services for their enrollees—to learn about the practices they use to manage spending on imaging services. To determine what management practices the Centers for Medicare and Medicaid Services (CMS) uses for imaging services, we interviewed CMS officials, including those from the Office of Clinical Standards and Quality, the Coverage and Analysis Group, and the Program Integrity Group, and officials from Medicare Part B contractors that together process claims for nine different states. We conducted our work from January 2007 through May 2008 in accordance with generally accepted government auditing standards. Appendix IV: Characteristics of GAO Sample of Private Plans That Actively Manage Imaging Services (February 2008). The appendix table lists, for each sampled plan, its approximate number of covered lives (ranging from 78,000 to 34 million) and the states in which it operates; it also notes that UniCare is owned by Wellpoint. In addition to the contact name above, Jessica Farb and Thomas A. Walke, Assistant Directors; Todd Anderson; Iola D’Souza; Hannah Fein; Julian Klazkin; Emily Loriso; and Richard Lipinski made key contributions to this report.
The Centers for Medicare & Medicaid Services (CMS)--an agency within the Department of Health and Human Services (HHS)--and the Congress, through the Deficit Reduction Act of 2005 (DRA), recently acted to constrain spending on imaging services, one of the fastest growing sets of services under Medicare Part B, which covers physician and other outpatient services. GAO was asked to provide information to help the Congress evaluate imaging services in Medicare. In this report, GAO provides information on (1) trends in Medicare spending on imaging services from 2000 through 2006, (2) the relationship between spending growth and the provision of imaging services in physicians' offices, and (3) imaging management practices used by private payers that may have lessons for Medicare. To do this work, GAO analyzed Medicare claims data from 2000 through 2006, interviewed private health care plans, and reviewed health services literature. From 2000 through 2006, Medicare spending for imaging services paid for under the physician fee schedule more than doubled--increasing to about $14 billion. Spending on advanced imaging, such as CT scans, MRIs, and nuclear medicine, rose substantially faster than other imaging services such as ultrasound, X-ray, and other standard imaging. GAO's analysis of the 6-year period showed certain trends linking spending growth to the provision of imaging services in physician offices. The proportion of Medicare spending on imaging services performed in-office rose from 58 percent to 64 percent. Physicians also obtained an increasing share of their Medicare revenue from imaging services. In addition, in-office imaging spending per beneficiary varied substantially across geographic regions of the country, suggesting that not all utilization was necessary or appropriate. By 2006, in-office imaging spending per beneficiary varied almost eight-fold across the states--from $62 in Vermont to $472 in Florida. Private health care plans that GAO interviewed used certain practices to manage spending growth that may have lessons for CMS. They relied chiefly on prior authorization, which requires physicians to obtain some form of plan approval to assure coverage before ordering a service. Several plans attributed substantial drops in annual spending increases on imaging services to the use of prior authorization. In contrast, CMS employs an array of retrospective payment safeguard activities that occur in the post-delivery phase of monitoring services and are focused on identifying medical claims that do not meet certain billing criteria. The private plans' experience suggests that front-end management of these services could add to CMS's prudent purchaser efforts.
The Clean Air Act gives EPA authority to set national standards to protect human health and the environment from emissions that pollute ambient (outdoor) air. The act assigns primary responsibility for ensuring adequate air quality to the states. The pollutants regulated under the act can be grouped into two categories—“criteria” pollutants and “hazardous air” pollutants. While small in number, criteria pollutants are discharged in relatively large quantities by a variety of sources across broad regions of the country. Because of their widespread dispersion, the act requires EPA to determine national standards for these pollutants. These national standards are commonly referred to as the National Ambient Air Quality Standards (NAAQS). The NAAQS specify acceptable air pollution concentrations that should not be exceeded within a geographic area. States are required to meet these standards to control pollution and to ensure that all Americans have the same basic health and environmental protection. NAAQS are currently in place for six air pollutants: ozone, carbon monoxide, sulfur dioxide, nitrogen dioxide, lead, and particulate matter. The second category, referred to as “hazardous air pollutants” or “air toxics,” includes chemicals that cause serious health and environmental hazards. For the most part, these pollutants emanate from specific sources, such as auto paint shops, chemical factories, or incinerators. Prior to its amendment in 1990, the act required EPA to list each hazardous air pollutant that was likely to cause an increase in deaths or in serious illnesses and establish emission standards applicable to sources of the listed pollutant. By 1990, EPA had listed seven pollutants as hazardous: asbestos, beryllium, mercury, vinyl chloride, arsenic, radionuclides, and benzene. However, the agency was not able to establish emissions standards for other pollutants because EPA, industry, and environmental groups disagreed widely on the safe level of exposure to these substances. The 1990 amendments established new information gathering, storage, and reporting demands on EPA and the states. Required information ranged from data on ground-level pollutants to data on atmospheric pollutants. For example, states with ozone nonattainment areas must require owners or operators of stationary sources of nitrogen oxides or volatile organic compounds to submit to the state annual statements showing actual emissions of these pollutants. Also, the amendments expanded the air toxics category to include a total of 189 hazardous air pollutants that are to be controlled through technology-based emission standards, rather than health-based standards as the previous law had required. To establish technology-based standards, EPA believes that it needs to collect information on emissions of these hazardous air pollutants. In addition, the amendments initiated a national operating permit program that requires new information to be collected from sources that release large amounts of pollutants into the air. Further, the amendments require new information about acid rain, stratospheric ozone-depleting chemicals, and ecological and health problems attributed to air pollutants. Appendix I identifies titles of the act and selected additional data collection requirements imposed by the new law. EPA designed AIRS in stages during the 1980s to be a national repository of air pollution data. EPA believed that having this information would help it and the states monitor, track, and improve air quality. 
The system is managed by EPA’s Information Transfer and Program Integration Division in the Office of Air Quality Planning and Standards. The Office of Air Quality Planning and Standards, under the Assistant Administrator for Air and Radiation, manages the air quality program. AIRS was enhanced in response to the 1990 amendments, when additional gathering, calculating, monitoring, storing, and reporting demands were placed on the system. AIRS currently consists of four modules or subsystems:
Facility Subsystem: This database, which became operational in 1990, contains emission, compliance, enforcement, and permit data on air pollution point sources that are monitored by EPA, state, and local regulatory agencies.
Air Quality Subsystem: This database, which became operational in 1987, contains data on ambient air quality for criteria, air toxic, and other pollutants, as well as descriptions of each monitoring station.
Area and Mobile Source Subsystem: This is a database for storing emission estimates and tracking regulatory activities for mobile air pollution sources, such as motor vehicles; small stationary pollutant emitters, such as dry cleaners; and natural sources, such as forest fires. The subsystem became operational in 1992 and is scheduled to be phased out by September 1995 due to budget cuts and low utilization.
Geo-Common Subsystem: This database, which became operational in 1987, contains identification data such as code descriptions used to identify places, pollutants, and processes; populations of cities and/or counties; and numerical values that pertain to air quality standards and emission factors that are used by all the other subsystems.
Information provided by EPA, which we did not independently verify, indicates that the total cost to develop and operate the system from 1984 through 1995 will be at least $52.6 million. Budgeted operating and maintenance costs for fiscal year 1996 are projected to be $2.7 million. Neither of these estimates includes states’ personnel costs. The Facility Subsystem accounted for the largest portion of subsystem costs. Appendix II provides a more detailed breakdown of estimated subsystem costs for fiscal years 1984 through 1995. Budgeted subsystem costs were not available for fiscal year 1996. To determine whether EPA’s planned state emissions reporting requirements exceeded the agency’s actual program needs, we reviewed the Clean Air Act, and we analyzed various information reporting requirements of the 1990 amendments and EPA documents interpreting requirements of the amendments. We also analyzed a draft EPA emissions reporting regulation and compared its reporting requirements with an EPA emissions reporting options paper examining several alternative reporting levels. Further, we evaluated state and state air pollution association comments on the draft regulation. Finally, we reviewed other EPA emission reporting guidance documents and interviewed EPA, state, and local air pollution officials to obtain their comments on the draft regulation. EPA officials interviewed were from the Information Transfer and Program Integration Division and the Emissions, Monitoring, and Analysis Division in the Office of Air Quality Planning and Standards. State representatives interviewed were from Arizona, California, Michigan, New Hampshire, Tennessee, and Wisconsin. Local officials interviewed were from Ventura County, California, and the South Coast Air Quality Management District, Diamond Bar, California. 
To determine whether states use AIRS to monitor emissions data, we reviewed early AIRS design and development documents and examined EPA documents evaluating AIRS Facility Subsystem use by all the states. Further, we examined comments and/or analyses provided to EPA by seven states on their use of AIRS. We also evaluated original user requirements and other AIRS documents to determine the original purpose and anticipated users of AIRS. In addition, we interviewed EPA, state, and vendor information system officials on states’ use of AIRS and state information systems. Vendor representatives interviewed were from Martin Marietta Technical Services, Inc., and TRC Environmental Corporation. We performed our work at the EPA AIRS program offices in Research Triangle Park and Durham, North Carolina, and at the AIRS 7th Annual Conference in Boston, Massachusetts. Our work was performed from October 1994 through May 1995, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Administrator of the Environmental Protection Agency. In response, on June 29, 1995, we received comments from the Acting Director for the Office of Air Quality Planning and Standards. EPA’s draft regulation on states’ reporting of air pollution emissions exceeded what EPA needed to meet its minimum air pollution program needs. EPA has suspended its promulgation of the regulation and has recently begun studying alternative reporting options. EPA began work on the now-suspended emissions regulation in order to consolidate and standardize several state emissions reporting requirements (i.e., emission statements, periodic emission inventories, and annual statewide point source reporting) and to align these requirements with the mandates in the 1990 amendments. Draft versions of the regulation were circulated in late 1993 and early 1994 to obtain preliminary comments from several states. Three states commented to EPA on the draft regulation, and one provided written comments. This state concluded that the level of detail required by the proposed regulation was not necessary. The state also noted that the draft regulation required data on each emission point within a plant, rather than aggregate data for each facility, and on items related to a factory’s process and equipment, such as process rate units, annual process throughput, and typical daily seasonal throughput. Further, this state asserted that annual reporting of hazardous air pollution emissions, as required by the draft regulation, is not required by the amendments. The state said that because of the additional complexity of toxic air pollutant data compared to criteria pollutant data, annual reporting to AIRS would not be feasible. In addition, in a letter to EPA addressing several AIRS issues, seven states also mentioned the draft regulation. These states said that the draft regulation would require them to submit more highly detailed data items into AIRS than called for under the amendments and other EPA-mandated programs. Further, these states noted that providing the additional data sought in the draft regulation concerning hazardous air pollutant emissions would require developing more complicated toxic chemical databases, which are very costly to develop. The states noted that additional resources to develop these databases were not available. EPA acknowledged these concerns and has suspended the regulation. 
In December 1994, EPA issued a study that stated that minimum program needs could be met with a fraction of the data that would have been required by the suspended regulation. Our analysis of the study revealed that, in one case, EPA needed to collect only about 20 percent of the volatile organic compounds data requested in the suspended regulation to meet minimum program needs. The study showed that, in this case, an estimated 1,323,540 of these data items would have to be reported by California under the draft regulation, while only 241,574 data items would be reported under the minimum program needs option. According to representatives in EPA’s Emissions, Monitoring, and Analysis Division, most other states could reduce the amount of data submitted to EPA by a similar proportion and still meet minimum program needs. (See appendix III for additional state examples.) However, officials in EPA’s Office of Air Quality Planning and Standards noted that while the reduced level of data would meet minimum program needs, other important data that the agency believes could contribute to a more effective program would not be collected. Nevertheless, collection of these additional data would place an extra burden on the states. EPA has now begun reevaluating the information it needs from states and is considering various reporting alternatives. The use of the AIRS Facility Subsystem by heavy emission states for tracking air pollution emissions is limited. When AIRS was originally designed, states were expected to be one of its primary users; however, most heavy emission states now use their own systems because these systems are more efficient and easier to use than AIRS. The Facility Subsystem is the official repository for emission inventory, regulatory compliance, and permit data. It contains annual emissions estimates for criteria pollutants and daily emissions estimates. The subsystem was developed by EPA to track, monitor, and assess state progress in achieving and maintaining national ambient air quality standards and is also used to report the status of these efforts to the Congress. It was also developed to allow state and local air pollution control agencies to monitor and track emissions and make midcourse adjustments, as necessary, to achieve air quality standards. EPA requires that states submit data to the subsystem either in an AIRS-compatible format or directly to the subsystem. The states receive these data from thousands of sources around the country. For the 1990 base year inventory, over 52,000 sources reported data through the states to the AIRS Facility Subsystem. Each state is to use these data to help prepare a plan detailing what it will do to improve the air quality in areas that do not meet national standards. While all the states must input emission and other data into the Facility Subsystem, most heavy emission states do not use the subsystem internally to monitor and analyze emissions and compliance data. In many cases, these states already had their own systems to perform these functions. Each state’s system is customized to that particular state’s program data and reporting needs. Of the 10 states that account for almost half of the combined emissions of the criteria pollutants, only one (Indiana) is a direct user of the emissions portion of the subsystem. Further, of these same 10 states, only 4 (California, Georgia, Indiana, and Pennsylvania) are direct users of the compliance portion of the subsystem. 
By contrast, a greater proportion of the smaller emission source states use the Facility Subsystem to manage and analyze air pollution data. These states do not have their own air pollution information systems. In his comments, the Acting Director for the Office of Air Quality Planning and Standards expressed concern that our assertion that the proposed reporting requirements exceeded EPA’s minimum program needs was based primarily on the written comments provided by one state. This is incorrect. Our finding is based primarily on our analysis of EPA’s December 1994 study, which also concluded that minimum program needs could be met with a fraction of the data that would have been required by the suspended regulation. The Acting Director also commented that the report did not adequately reflect EPA’s efforts to respond to the states’ concerns. We believe that the report makes clear that EPA took action and suspended the draft regulation based on state concerns. Finally, the Acting Director stated that the draft report did not reflect the success of EPA’s regulatory review process and only focused on an interim finding that EPA addressed by suspending the regulation. We believe the report adequately reflects EPA’s process and states’ concern with the additional burden that would have been imposed on them if the draft regulation had been promulgated. For example, we note in the report that EPA has recently begun studying alternative reporting options. We are sending copies of this report to the Administrator, EPA; interested congressional committees; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. Please call me at (202) 512-6253 if you or your staff have any questions concerning this report. Major contributors are listed in appendix IV.
Appendix I: Titles of the Act and Selected Additional Data Collection Requirements
Expands several existing information collection, storage, and reporting requirements currently being met by the Aerometric Information Retrieval System (AIRS). Thousands of additional facilities in ozone nonattainment areas will be defined as “major sources” and will thus be subject to enhanced monitoring, recordkeeping, reporting, and emissions control requirements.
Expands and revises emission limitations for mobile sources (automobiles and trucks) of air pollutants. New standards are established for motor vehicle engines, fuel content, alternative fueled vehicles, and other mobile sources. AIRS was not affected by these requirements.
Creates a program to monitor and control the 189 hazardous air pollutants. AIRS is being enhanced to provide a tool for EPA to develop technology-based standards and, when standards have not been developed, for state pollution control agencies to make case-by-case decisions on the best demonstrated control technologies for hazardous air pollutants within an industry.
Establishes a new federal program to control acid deposition. AIRS was not affected by these requirements. The separate Acid Rain Data System/Emissions Tracking System provides for recording and validating emissions data from sources emitting sulfur dioxide and nitrogen oxides, ingredients of acid rain.
Establishes a new permit program that, in large part, is to be implemented by the states. AIRS is being enhanced to accommodate additional permit program data elements and to merge emissions and enforcement data.
Creates a new federal program for the protection of stratospheric ozone. Each person producing, importing, or exporting certain substances that cause or contribute significantly to harmful effects on the ozone layer must report to EPA quarterly the amount of each substance produced. AIRS was not affected by this requirement.
Enhances federal enforcement authority, including authority for EPA to issue field citations for minor violations. AIRS was enhanced to collect and report new data concerning administrative, field citation, and other actions.
Includes various miscellaneous provisions, including provisions addressing emissions from sources on the outer continental shelf and visibility issues. AIRS was not affected by these provisions.
Requires several national or regional research programs. Most of the research programs require air data that can be integrated with data from other media or from other systems. This may require system modification.
Appendix IV: Major Contributors to This Report
Allan Roberts, Assistant Director
Barbara Y. House, Senior Evaluator
GAO reviewed selected data collection and reporting requirements of the 1990 Clean Air Act Amendments, focusing on whether: (1) the Environmental Protection Agency's (EPA) planned state emissions reporting requirements exceed its program needs; and (2) states use the EPA Aerometric Information Retrieval System (AIRS) to monitor emissions data. GAO found that: (1) EPA draft regulation would have required states to submit emissions data that exceeded its minimum air pollution program needs and to develop complicated pollutant databases that they could not afford; (2) EPA has since suspended the regulation and is considering alternative reporting options; (3) despite EPA intentions, 9 of the 10 heavy emission states use their own independently developed systems to track air pollution emissions; and (4) the state tracking systems are more efficient and easier to use than AIRS.
During the late 1960s and 1970s, Congress enacted several laws that were intended to help ensure fair and equitable access to credit for both individuals and communities. These laws included the Fair Housing Act (FHA) in 1968, the Equal Credit Opportunity Act (ECOA) in 1974, and the Home Mortgage Disclosure Act (HMDA) in 1975. ECOA and FHA constitute the federal antidiscrimination statutes applicable to lending practices and commonly are referred to as the “fair lending laws.” Although both statutes prohibit discrimination in lending, FHA antidiscrimination provisions also apply more generally to housing, such as prohibiting discrimination in the sale or rental of housing. Unlike ECOA and FHA, HMDA does not prohibit any specific activity of lenders, but it establishes data collection, reporting, and disclosure obligations for particular institutions, which are discussed below. The Federal Reserve has general rulemaking authority for ECOA and HMDA, and the Department of Housing and Urban Development (HUD) has similar rulemaking authority for FHA. Responsibility for federal oversight and enforcement of the fair lending laws is principally shared among three enforcement agencies and five depository institution regulators (see app. II for more details). In general, with respect to the relevant fair lending law, HUD and the Department of Justice (DOJ) have jurisdiction over all depository institutions and nondepository lenders, including “independent” mortgage lenders, such as mortgage finance companies, which are not affiliated with or owned by federally insured depository institutions (such as banks, thrifts, or credit unions) or by a federally regulated bank or savings and loan holding company. The Federal Trade Commission (FTC) has jurisdiction pursuant to ECOA over all nondepository lenders, including independent mortgage lenders, subsidiaries and affiliates of depository institutions, and nondepository subsidiaries of bank holding companies. Unlike HUD and DOJ, FTC does not have enforcement authority over federally regulated depository institutions. The following describes the fair lending enforcement responsibilities of HUD, FTC, and DOJ in more detail: Under FHA, HUD investigates all complaints filed with it alleging violations of FHA and may initiate investigations and file its own complaints, referred to as Secretary-initiated complaints, against independent mortgage lenders, or any other lender, including depository institutions that HUD believes may have violated the act. FHA requires HUD to seek conciliation between the parties to any complaint. If conciliation discussions are unsuccessful, and HUD determines after an investigation that reasonable cause exists to believe that a discriminatory housing practice has occurred, or is about to occur, HUD must issue a Charge of Discrimination against those responsible for the violation and prosecute the claim before an administrative law judge. However, after a charge has been issued, any party may elect to litigate the case instead in federal district court, in which case DOJ assumes responsibility from HUD for pursuing litigation. A HUD administrative law judge or federal judge may order lenders to change their policies, compensate borrowers affected by the violation, and take steps to prevent future violations, in addition to imposing civil penalties. FTC also may conduct investigations and file ECOA complaints against nonbank mortgage lenders or brokers—including but not limited to nonbank subsidiaries of banks and bank holding companies—that may be violating ECOA. 
If FTC concludes that it has reason to believe ECOA is being violated, the agency may file a lawsuit against the lender in federal court to obtain an injunction and consumer redress. If FTC deems civil penalties are appropriate, the agency may refer the case to DOJ. Alternatively, FTC may bring an administrative proceeding against the lender before the agency’s administrative law judges to obtain an order similar in effect to an injunction. DOJ, which has both ECOA and FHA authority, may initiate its own investigations of any creditor—whether a depository or nondepository lender—under its independent authority or based on referrals from other agencies as described below. DOJ may file pattern or practice and other fair lending complaints in federal courts. The types of remedies that may be obtained in fair lending litigation include monetary settlements for consumer redress or civil fines; agreements by lenders to change or revise policies; the establishment of lender fair lending training programs; and other injunctive relief. The five depository institution regulators generally have fair lending oversight responsibilities for the insured depository institutions that they directly regulate, as well as certain subsidiaries and affiliates (see table 1). Along with the enforcement agencies, the Federal Reserve and the Office of Thrift Supervision (OTS) also have general authority over lenders that may be owned by federally regulated holding companies but are not federally insured depository institutions. Many federally regulated bank holding companies that have insured depository subsidiaries, such as national or state-chartered banks, also may have nonbank subsidiaries, such as mortgage finance companies. Under the Bank Holding Company Act of 1956, as amended, the Federal Reserve has jurisdiction over such bank holding companies and their nonbank subsidiaries. OTS has jurisdiction over the subsidiaries of savings and loan holding companies, which can include federally insured thrifts as well as noninsured lenders. Depository institution regulators conduct examinations of institutions they oversee to assess their fair lending compliance, including determining whether there is evidence that lenders have violated ECOA or the FHA. Under ECOA, depository institution regulators are required to refer lenders that may have violated the fair lending laws to DOJ if there is reason to believe that a lender has engaged in a pattern or practice of discouraging or denying applications for credit in violation of ECOA. The depository institution regulators are required to notify HUD of any instance where there is reason to believe that an FHA and ECOA violation has occurred that has not been referred to DOJ as a potential ECOA pattern or practice violation. Under the FHA, HUD must provide information to DOJ regarding any complaint in which there is reason to believe that a pattern or practice of violations occurred or that a group of persons has been denied rights under FHA and the matter raises an issue of general public importance. In addition, ECOA granted the depository institution regulators enforcement authority to seek compliance under section 8 of the Federal Deposit Insurance Act and the Federal Credit Union Act. Depository institution regulators have parallel jurisdiction over such matters even when a matter is referred to DOJ because there is reason to believe that a pattern or practice violation has occurred and DOJ does not defer the matter for administrative enforcement. 
The agencies must work together to assure there is no duplication of their efforts. The Federal Reserve, the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), and OTS generally may take an administrative enforcement action against an insured depository institution or an institution-affiliated party that is violating, or has violated, a law, rule, or regulation. The National Credit Union Administration (NCUA) may take administrative enforcement action against an insured credit union or its affiliated party that is violating or has violated a law, rule, or regulation. Depository institution regulators also have cease-and-desist authority; can order restitution for the victims of discrimination; and can issue orders requiring lenders to change or revise lending policies, institute a compliance program, or undergo external audits. Compliance with these orders can be enforced in federal court. Moreover, the regulators can impose civil money penalties for each day that a violation continues. HMDA, as amended, requires certain lenders to collect, disclose, and report data on the personal characteristics of mortgage borrowers and loan applicants (for example, their ethnicity, race, and sex), the type of loan or application (for example, if the loan is insured or guaranteed by a federal agency such as the Federal Housing Administration), and certain financial data such as the loan amount and borrowers’ incomes. HMDA’s purposes are to provide the public with loan data that can assist in identifying potential risks for discriminatory patterns and enforcing antidiscrimination laws, help the public determine if lending institutions are meeting the housing credit needs of their communities, and help public officials target community development investment. In 2002, the Federal Reserve, pursuant to its regulatory authority under HMDA, required financial institutions to collect certain mortgage loan pricing data for higher priced loans in response to the growth of subprime lending and to address concerns that minority and other targeted groups were being charged excessively high interest rates for mortgage loans. This requirement was effective on January 1, 2004. Specifically, lenders were required to collect and publicly disclose information about mortgages with annual percentage rates above certain designated thresholds. This 2004 revision to HMDA also was intended to provide depository institution regulators and the public with more information about mortgage lending practices and the potentially heightened risk for discrimination. The data were first reported and publicly disclosed in 2005. HMDA’s data collection and reporting requirements generally apply to certain independent mortgage lenders and federally insured depository institutions as set forth in Regulation C. As shown in figure 1, many more depository institutions than independent mortgage lenders are required to collect and report HMDA data (nearly 80 percent are depository institutions, and 20 percent are independent lenders). Lenders subject to HMDA’s requirements must submit the data by March 1 for the previous calendar year. For example, lenders submitted calendar year 2004 data—the first year in which lenders were required to collect and report mortgage pricing data—to the Federal Reserve by March 1, 2005. Through individual contracts with the other depository institution regulators and HUD, the Federal Reserve collects the HMDA data from all filers, performs limited data validity and quality reviews, checks with lenders as appropriate to clear up discrepancies, and publishes the data in September of each year. 
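To illustrate the pricing-disclosure mechanism described above, the sketch below shows one way a lender might flag loans whose annual percentage rate exceeds a benchmark rate by more than a designated threshold so that the rate spread is included with the HMDA submission. This is a hedged illustration rather than the regulatory text: the 3- and 5-percentage-point triggers for first and subordinate liens reflect commonly cited values for the 2004 reporting rule, and the benchmark and field names are assumptions.

```python
# Illustrative sketch, not the regulatory text, of flagging loans whose annual
# percentage rate (APR) exceeds a benchmark rate by more than a designated
# threshold, so that the rate spread would be reported with the lender's HMDA
# data. Thresholds, benchmark, and field names are assumptions for this example.

from dataclasses import dataclass
from typing import Optional

FIRST_LIEN_THRESHOLD = 3.0        # percentage points above the benchmark (assumed)
SUBORDINATE_LIEN_THRESHOLD = 5.0  # percentage points above the benchmark (assumed)

@dataclass
class Loan:
    loan_id: str
    apr: float              # annual percentage rate, in percent
    benchmark_rate: float   # comparable-maturity benchmark yield, in percent
    first_lien: bool

def reportable_rate_spread(loan: Loan) -> Optional[float]:
    """Return the rate spread if it would be reported; otherwise return None."""
    spread = loan.apr - loan.benchmark_rate
    threshold = FIRST_LIEN_THRESHOLD if loan.first_lien else SUBORDINATE_LIEN_THRESHOLD
    return spread if spread >= threshold else None

loans = [
    Loan("A-001", apr=9.2, benchmark_rate=4.6, first_lien=True),    # spread 4.6 -> reported
    Loan("A-002", apr=6.1, benchmark_rate=4.6, first_lien=True),    # spread 1.5 -> not reported
    Loan("A-003", apr=11.0, benchmark_rate=4.8, first_lien=False),  # spread 6.2 -> reported
]

for loan in loans:
    print(loan.loan_id, reportable_rate_spread(loan))
```

The practical effect of a threshold-based rule like this is that only the subset of loans priced well above the benchmark carries reported rate spreads, which is why the resulting data are most informative about higher-priced (largely subprime) lending.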
Federal enforcement agencies and depository institution regulators use analysis of HMDA data and other information to identify lenders that potentially are at heightened risk of having violated the fair lending laws and target their investigations and examinations accordingly. However, several critical limitations in HMDA and other available data hamper federal fair lending oversight and enforcement efforts. First, HMDA data lack key underwriting information, such as borrowers’ credit scores or loan-to-value ratios, which may help explain why lenders may charge relatively higher interest rates or higher fees to some borrowers compared with others. Second, limited data are available on the premortgage loan application process to help determine if loan officers engage in discriminatory practices, such as steering minority applicants to high-cost loans, before a loan application is filed. Third, Regulation B, the regulation that implements ECOA, generally prohibits lenders from collecting personal characteristic data, such as applicants’ race, ethnicity, and sex, for nonmortgage loans, such as small business and credit card loans, which also impedes federal oversight efforts. Requiring lenders to collect and publicly report additional data could benefit federal oversight efforts as well as independent research into potential discrimination in lending, but also would impose additional costs, particularly on smaller institutions with limited recordkeeping systems. Several options, such as limiting additional data collection and reporting requirements to larger lenders, could help mitigate such costs while better ensuring that enforcement agencies and depository institution regulators have critical data necessary to help carry out their fair lending responsibilities. Since 2005, when HMDA mortgage pricing data became available, the Federal Reserve annually has screened the data to identify lenders with statistically significant pricing disparities, based on ethnicity or race, and voluntarily has shared the screening results with other federal and state agencies. First, the Federal Reserve systematically checks the data for errors (such as values that are outside the allowable ranges) or omissions, which may include contacting individual institutions for verification purposes. Second, using statistical analysis, the Federal Reserve matches loans made to minorities with loans made to non-Hispanic whites for each HMDA reporting lender, based on the limited information available in HMDA (such as property type, loan purpose, loan amount, location, date, and borrower income). Third, the Federal Reserve calculates disparities by race and ethnicity for rate spreads (among those loans for which rate spreads were reported) and the proportion of loans that are higher priced (the incidence of higher priced lending). Finally, it identifies those lenders with statistically significant disparities in either the amount of rate spread or the incidence of higher priced lending and develops a list it shares with the other agencies. As shown in table 2, which breaks out the Federal Reserve screening list for 2006 HMDA data, independent lenders that are under the jurisdiction of enforcement agencies accounted for almost half of lenders on the list, although they account for only about 20 percent of all HMDA data reporters. 
Federally insured and regulated depository institutions such as banks, thrifts, and credit unions, which comprise nearly 80 percent of all HMDA data reporters, accounted for the other half of the outlier list. Federal enforcement agencies generally use the Federal Reserve’s annual screening list, but also conduct independent analyses of HMDA data and other information to develop their own list of outliers, according to agency officials. For example, all of the enforcement agencies said that they incorporate the Federal Reserve’s annual screening list into their own ongoing screening process to identify targets for fair lending investigations. In addition, HUD and FTC officials said they also use other information to identify outliers, including consumer complaint data. Like enforcement agencies, depository institution regulators generally use the Federal Reserve screening list, independent analysis of HMDA data, and other information sources to identify potential outliers and other risk factors. The approaches that the depository institution regulators use may vary significantly. For example, OCC and OTS consider a range of potential risk factors in developing their annual outlier lists, including the Federal Reserve’s annual pricing outlier list, independent analysis of mortgage pricing disparities, approval and denial rate disparities, and indications of potential redlining and marketing issues, among others. Other depository institution regulators, such as FDIC and the Federal Reserve, generally focus on independent analysis of HMDA data and other information to develop outlier lists that are based on statistically significant pricing disparities, although they also may assess other risk factors, including approval and denial decisions and redlining, in assessing fair lending compliance at other lenders under their jurisdiction. FDIC and the Federal Reserve use this analysis to plan and scope their routine fair lending compliance examinations. As shown in table 3, OCC, due to the range of risks that it assesses, identified the largest number of outliers on the basis of its analysis of 2006 HMDA data. We discuss the agencies’ differing approaches in more detail and the potential implications of such differences later in this report. Without HMDA data, enforcement agencies’ and depository institution regulators’ ability to identify outliers and target their investigations and examinations would be limited. According to the depository institution regulators, analysis of HMDA data allows them to focus examination resources on lenders that may be at heightened risk of violating fair lending laws. In the absence of HMDA data, enforcement agencies and depository institution regulators would have to cull through loan files or request electronic data to assess a lender’s relative risk of having violated the fair lending laws, which could be a complex and time-consuming process. Although the development of outlier lists on the basis of HMDA data may allow enforcement agencies and depository institution regulators to prioritize fair lending law investigations and examinations, the lack of key information necessary to gauge a borrower’s credit risk, such as underwriting variables, limits the data’s effectiveness. Agency and depository institution regulatory officials have told us that the lack of key mortgage loan underwriting variables, such as borrowers’ credit scores, borrowers’ debt-to-income ratios, or the loan-to-value ratios of the mortgages, is a critical limitation of HMDA data. 
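As an illustration of the kind of disparity screening described above, the following minimal sketch compares the incidence of higher-priced lending between minority and non-Hispanic white borrowers for a single lender and tests whether the gap is statistically significant. It is not the Federal Reserve's actual methodology: the matching step on HMDA fields is omitted, and the loan records, field names, and significance cutoff are invented assumptions.

```python
# Minimal sketch of a pricing-disparity screen in the spirit of the process
# described above. The matching on HMDA fields (loan purpose, amount, income,
# location) is omitted, and all records below are invented for illustration.

from math import sqrt

def higher_priced_incidence(records):
    """Return (share of loans flagged as higher priced, number of loans)."""
    flagged = sum(1 for r in records if r["higher_priced"])
    return flagged / len(records), len(records)

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / standard_error

# Invented loan records for a single lender after matching on HMDA fields.
minority_loans = [{"higher_priced": i < 120} for i in range(400)]  # 30 percent higher priced
white_loans = [{"higher_priced": i < 80} for i in range(500)]      # 16 percent higher priced

p_minority, n_minority = higher_priced_incidence(minority_loans)
p_white, n_white = higher_priced_incidence(white_loans)
z = two_proportion_z(p_minority, n_minority, p_white, n_white)

# Treat the lender as a potential outlier if the disparity is significant at
# roughly the 5 percent level (|z| > 1.96). This is a screening cue, not a
# finding of discrimination, because HMDA lacks underwriting controls such as
# credit scores that might explain the gap.
is_outlier = abs(z) > 1.96
print(f"incidence: minority={p_minority:.2f}, white={p_white:.2f}, z={z:.2f}, outlier={is_outlier}")
```

A flag from a screen like this indicates heightened risk that warrants follow-up rather than a conclusion about discrimination, precisely because the underwriting variables discussed next are not part of the data.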
Underwriting variables are important because they may help explain mortgage lending disparities among what otherwise appear to be similarly situated loan applicants and borrowers of different ethnicity, race, or sex and may help to uncover additional disparities that may not be evident without the underwriting variables. The lack of underwriting data may result in enforcement agencies and depository institution regulators initiating investigations or examinations of lenders that may charge relatively higher interest rates to certain borrowers due to business necessities, such as risk-based pricing that reflects borrowers’ creditworthiness. FTC officials also said that the nature of the information HMDA data provide on potential mortgage pricing disparities limits the data’s usefulness for the agency’s enforcement activities. In particular, FTC officials said that reported HMDA data are geared toward assessing mortgage pricing disparities among subprime lenders rather than lenders that may offer prime, conventional mortgages or government-guaranteed (or -insured) mortgages. The FTC officials said that lenders that originate such mortgages generally do so at levels below the thresholds established in HMDA data reporting requirements. Thus, the FTC officials said that the Federal Reserve’s annual outlier list disproportionately consists of independent and other lenders that have specialized in subprime mortgage loans and that the agency’s capacity to assess the potential for discrimination in the prime and government-guaranteed and -insured mortgage markets is limited. To compensate for the lack of key underwriting information included in HMDA data, officials from enforcement agencies and depository institution regulators said that they typically request additional data once an outlier investigation or examination has been initiated. Some officials said that while it generally is easier for larger lenders to provide the data on a timely basis because most of them store it electronically, smaller lenders with paper-based loan documentation may face greater challenges in doing so or may not maintain requested data. When the underwriting data are received, enforcement agency and depository institution regulatory officials said that they use them to determine if statistically significant pricing and denial disparities between mortgage loan applicants and borrowers of different ethnicity, race, or sex still exist. Officials we contacted generally agreed that the annual screening process would be more efficient if they had access to additional underwriting data at the time they screened the HMDA data to identify potential outliers. To try to address the timing issue, in 2009, OCC began a pilot program to obtain this information earlier in the screening process. Specifically, OCC has requested that six large national banks separately provide certain specified underwriting information to the agency at the same time they report HMDA data. The lack of key underwriting information in HMDA data also limits independent research, advocacy, and private plaintiff case development regarding potential discrimination in mortgage lending. Because HMDA data are publicly available, researchers, community groups, and others use them to assess the potential risk for discrimination in the mortgage lending industry and at particular lenders. 
However, researchers, community groups, and others have stated that the absence of sufficient underwriting data makes it difficult to determine whether lenders had a reasonable basis for mortgage pricing and other disparities identified through analysis of HMDA data alone. As a result, researchers have obtained aggregated mortgage underwriting data from other sources and matched them with HMDA data to assess potential risk for discrimination in mortgage lending. While this approach may help identify the potential risk for discrimination, the underwriting data obtained may not be as accurate as data reported directly by the lenders as part of HMDA. Additionally, FDIC noted that although the data from other sources may reflect commonly accepted standards for underwriting, they may or may not reflect a particular lender’s actual policy. Requiring lenders to collect and publicly report key underwriting data as part of their annual HMDA data submissions would benefit regulatory and independent research efforts to identify discrimination in mortgage lending. With underwriting data included in HMDA data, enforcement agencies and depository institution regulators may be better able to identify lenders that may have disparities in mortgage lending, enabling them to better target investigations and examinations toward the lenders most at risk of having violated the fair lending laws. Moreover, this could help minimize burdens on lenders that do not represent significant risks but are flagged as outliers without the additional data. Similarly, such data might help researchers and others better assess the potential risk for discrimination, independently assess the enforcement of fair lending laws, and enhance transparency. For example, researchers, advocacy groups, and potential plaintiffs could use independent analysis of the data to more efficiently monitor discrimination by particular lenders and in the mortgage lending industry generally, which could help inform Congress and the public about compliance with the fair lending laws. Although expanding HMDA data to include certain underwriting data could facilitate regulatory and independent research efforts to assess the potential risk for mortgage discrimination, it would result in additional costs to lenders. As we have reported previously, quantifying such costs in a meaningful way can be difficult for a variety of reasons, such as challenges associated with obtaining reliable data from potentially thousands of lenders that have different cost accounting systems and underwriting policies. According to representatives from a banking trade group and a large lender, the additional costs likely would include expenses associated with (1) establishing or upgrading information systems to collect the data in the proper format, (2) training staff who would be responsible for collecting and reporting the data, and (3) legal and auditing work to help ensure that the data were accurate and in compliance with established standards. The representative from the large lender said that costs also would be associated with electronically storing and securing additional types of sensitive data that eventually would be made public. Additionally, the official said that thousands of employees who currently work on underwriting but are not involved in reporting HMDA data would have to receive fair lending compliance training. 
Additionally, the official said that ensuring compliance with additional public reporting requirements would require additional legal support to certify the accuracy of the additional data. Finally, the costs may be relatively higher for smaller institutions because they may be less likely than larger lenders to collect and store underwriting and pricing data electronically or may not currently retain any pricing data. While certain key underwriting data, such as borrowers’ credit scores, debt-to-income (DTI) ratios, and loan-to-value (LTV) ratios, generally would benefit regulatory screening efforts and independent research, advocacy, and private enforcement, they may not be sufficient to resolve questions about potential heightened risk for discrimination by individual lenders or in the industry generally. As part of fair lending investigations and examinations, enforcement agencies and depository institution regulators may request a range of additional underwriting data from lenders, such as detailed product information, mortgage-rate lock dates, overages, additional fees paid, and counteroffer information, to help assess the basis for mortgage rate disparities identified through initial analysis of HMDA data. However, according to representatives from a banking trade group and a large lender, requiring lenders to collect and publicly report such additional underwriting data as part of their annual HMDA data submissions likely would involve additional training, software, compliance, and other associated costs. In addition, according to FTC, overage data may be closely guarded proprietary information, which lenders likely would object to reporting publicly on the grounds that doing so would disclose the information to their competitors. Several options could reduce the potential costs associated with requiring lenders to collect and report certain underwriting variables as part of their HMDA data submissions. These options include the following:
Large lender requirement—requiring only the largest lenders to provide expanded reporting. According to officials, many of these lenders already collect and store such information electronically. According to published reports, the top 25 mortgage originators accounted for 92 percent of total mortgage loan volume in 2008. Thus, such a requirement would focus on lenders that constitute the vast majority of mortgage lending and minimize costs for smaller lenders, which may not record underwriting data in electronic form as most larger lenders reportedly do.
Regulatory (nonpublic) reporting of expanded data—requiring all HMDA filers to routinely report underwriting data only to the depository institution regulators in conjunction with HMDA data (as OCC is requiring of six large lenders in its pilot study). In so doing, lenders may facilitate depository institution regulators’ efforts to identify potential outliers while minimizing concerns about potential public reporting and compliance costs.
Nonpublic reporting limited to large lenders—requiring only the largest lenders to report expanded data to the depository institution regulators in conjunction with their HMDA data filings.
While all of these options would help mitigate additional costs to some degree compared with a general requirement that lenders collect and publicly report underwriting data, each would result in limited or no additional information available to researchers and the public—one of the purposes of the act. 
In addition, according to DOJ, it is not clear whether the enforcement agencies would have access to the expanded data under the second or third options described above. Nevertheless, any of these options could help enhance depository institution regulators’ ability to oversee and enforce fair lending laws. Without additional routinely provided underwriting data, agencies and depository institution regulators will continue to expend limited resources collecting such information on a per institution basis as they initiate investigations and examinations. Another data limitation that might affect federal efforts to enforce the fair lending laws is the lack of information about the preapplication process for mortgage loans. HMDA data only capture information after a mortgage loan application has been filed and a loan approved or denied. However, fair lending laws apply to the entire loan process. The preapplication process involves lenders’ treatment of potential borrowers before an application is filed, which could affect whether the potential borrower applies for a loan and the type of loan. In a 1996 report on federal enforcement of fair lending laws, we reported that discrimination could occur in the treatment of customers before they actually applied for a mortgage loan. This type of discrimination, which also would be a violation under ECOA, could include spending less time with minority customers when explaining the application process, giving them different information on the variety of products available, or quoting different rates. Subsequent studies by researchers and fair housing organizations have continued to raise concerns about the potential risk for discrimination in mortgage lending during the preapplication phase. The methodology used in these studies often included a technique known as matched pair testing. In matched pair testing, individuals or couples of different ethnicity, race, or sex pose as mortgage loan applicants, visit lenders at different times, and meet with loan officers. The testers, or mystery shoppers, usually present comparable financial backgrounds in terms of assets, income, debt, and credit history, and are asked to request information about similar loan products. For example, in a 2006 study th utilized testers who posed as low-income, first-time home buyers in approximately 250 matched pair tests, researchers found evidence of adverse treatment during the preapplication phase of African-Americans at and Hispanics in the Chicago metropolitan area. Specifically, the study found that African-American and Hispanic testers were less likely than their white counterparts to be given detailed information about re or additional loan products and received less coaching and follow-up communication. However, the authors of the study found that in Los Angeles the treatment of white, African-American, and Hispanic testers generally was similar. Agency officials we contacted said that the use of testers may have certain advantages in terms of identifying potential risks for discrimination by loan officers and other lending officials, but it also has a number of challenges and limitations. For example, officials from FTC, NCUA, and OTS said that testers require specialized skills and training, which results in additional costs. 
FTC officials said that, in the early 1990s, they used testers as a part of their fair lending oversight activities and found the effort not only costly but also inconclusive because matching similarly situated borrowers and training the testers was difficult. OCC indicated that it conducted a pilot testing program from 1994 through 1995 and found that indications of differing treatment were weak and involved primarily unverifiable subjective perceptions, such as how friendly the loan officer was to the tester. FTC officials said that current technological advances have made the use of testers even more difficult because loan officers can check a potential loan applicant’s credit scores during the initial meeting. Therefore, these officials said that loan officers may suspect testers are not who they claim to be, thereby raising questions about potential fraud that could affect the loan officer’s interactions with the testers and make any results unreliable. FTC officials also noted that it was difficult to script identical scenarios because testers often would ask questions, react, and respond differently, which can make test results unreliable. DOJ officials said that they only occasionally used testers in the context of fair lending enforcement due to the difficulties described above and the complexities involved in analyzing lender treatment of testers during the mortgage preapplication process. However, FDIC officials said they were in the early stages of analyzing the costs of using testers and considering whether it would be beneficial to use them in conjunction with their fair lending reviews. While the enforcement agencies and depository institution regulators generally do not use testers to assess the potential risk for discrimination during the preapplication phase, the alternative strategies that are used have limitations. In general, officials said that they encourage lenders to voluntarily test for fair lending compliance, which may include the use of testers. Officials said that they would review any available analysis when conducting fair lending examinations. However, according to Federal Reserve and OCC officials, information provided by the use of in-house testers may be protected by the ECOA self-testing privilege, which limits their ability to use it for examination purposes. Federal Reserve officials also noted that few lenders conduct such testing. Depository institution regulators also said that they review customer complaint data; compare the number of applications filed by mortgage loan applicants of different ethnicity, race, or sex and investigate any potential disparities; and review HMDA and additional data to help determine the extent to which minority mortgage loan applicants may have been steered into relatively high-cost loans although they might have qualified for less-expensive alternatives. However, these alternative sources share the same limitations as the use of testers, including that the information may provide only an inferential basis for determining if discrimination occurred during the preapplication process and may not be reliable. The depository institution regulators have yet to identify robust data or means of assessing potential discrimination during this critical phase of the mortgage lending process. 
In a recent report on the financial regulatory system, the Department of the Treasury suggested that surveys of borrowers and loan applicants may be an alternative means of assessing compliance with consumer protection laws, such as the fair lending laws. Without adequate data from the preapplication phase, such as through the use of testers, surveys, or alternative means, fair lending oversight and enforcement will be incomplete because it will include only information on the borrowers who apply for credit and not the larger universe of potential borrowers who sought it.

A final data limitation is that depository institution regulators generally do not have access to personal characteristic data (for example, race, ethnicity, and sex) for nonmortgage loans, such as business, credit card, and automobile loans. In a 2008 report, we noted that Federal Reserve Regulation B generally prohibits lenders from requesting and collecting such personal characteristic data from applicants for nonmortgage loans. The Federal Reserve concluded in 2003 that lifting Regulation B's general prohibition and permitting voluntary collection of personal characteristic data for nonmortgage loan applicants, without any limitations or standards, could create some risk that the information would be used for discriminatory purposes. The Federal Reserve also argued that amending Regulation B and permitting lenders to collect such data on a voluntary basis would result in inconsistent and noncomparable data. In the absence of personal characteristic data for nonmortgage loans, we found that agencies tended to focus their oversight activities more on mortgage lending rather than on areas such as automobile, credit card, and business lending that are also subject to fair lending laws. While the interagency procedures that depository institution regulators use to conduct fair lending examinations provide for assessing the potential risk for discrimination in nonmortgage lending, our 2008 report concluded that such procedures had a high potential for error and were time-consuming and costly. Under the interagency procedures, examiners may make use of established "surrogates" to deduce nonmortgage loan applicants' race, ethnicity, or sex. For example, after consulting with their agency's supervisory staff, the procedures allow examiners to assume that an applicant is Hispanic based on the last name, female based on the first name, or likely to be an African-American based on the census tract of the address. However, there is the potential for error in the use of such surrogates (for example, certain first names are gender neutral, and not all residents of a particular census tract may be African-American). Furthermore, using such surrogates may require examiners to cull through individual nonmortgage loan files. In contrast, HMDA data allow enforcement agencies and depository institution regulators to identify potential outliers through statistical analysis. As we reported, requiring lenders to collect personal characteristic data for nonmortgage loans to facilitate regulatory supervision and independent research into the potential risk for discrimination would involve additional costs for lenders. These potential costs include information system integration, employee training, and compliance costs.
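To illustrate how the surrogate approach described above can work in practice, the sketch below assigns probable sex, ethnicity, and race to a nonmortgage loan file using a first name, a surname, and the census tract of the applicant's address. The name lists, census-tract shares, and 50-percent threshold are hypothetical placeholders of our own, not values prescribed by the interagency procedures, and the sketch also reflects why such surrogates are error-prone: gender-neutral names and unknown tracts yield inconclusive results.

```python
# Illustrative sketch only: applying simple "surrogates" of the kind described
# above to nonmortgage loan applicants. The name lists, census-tract shares, and
# the 50-percent threshold are hypothetical placeholders, not values from the
# interagency procedures.
FEMALE_FIRST_NAMES = {"maria", "susan", "latoya"}          # hypothetical list
HISPANIC_SURNAMES = {"garcia", "rodriguez", "martinez"}    # hypothetical list
TRACT_PCT_AFRICAN_AMERICAN = {"17031840300": 0.82, "17031081500": 0.04}  # hypothetical census data

def infer_surrogates(first_name: str, last_name: str, census_tract: str) -> dict:
    """Deduce probable sex, ethnicity, and race for a loan file that lacks
    self-reported data. Each field is None when the surrogate is inconclusive,
    for example, a gender-neutral first name or an unknown census tract."""
    sex = "female" if first_name.lower() in FEMALE_FIRST_NAMES else None
    ethnicity = "hispanic" if last_name.lower() in HISPANIC_SURNAMES else None
    tract_share = TRACT_PCT_AFRICAN_AMERICAN.get(census_tract)
    race = "african_american" if tract_share is not None and tract_share > 0.5 else None
    return {"sex": sex, "ethnicity": ethnicity, "race": race}

print(infer_surrogates("Maria", "Garcia", "17031840300"))
```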
A requirement that lenders collect and publicly report such personal characteristic data likely would need to be accompanied by a requirement that they provide underwriting data to better inform assessments of their lending practices. However, because certain types of nonmortgage lending, such as small business lending, generally are more complicated than mortgage lending, the amount of underwriting data that would need to be reported to allow for informed assessments likely would be comparatively higher, as would the associated reporting costs. Similar to the options for expanding HMDA data, several options could facilitate depository institution regulators' efforts to assess the potential risk for discrimination in nonmortgage lending while mitigating potential lender costs. In particular, lenders could be required to collect such data for certain types of loans, such as small business loans, and make the data available to depository institution regulators rather than publicly report it.

Lenders that may represent heightened risks of fair lending violations are subject to relatively less comprehensive federal review of their activities than other lenders. Specifically, the Federal Reserve's annual analysis of HMDA pricing data and other information suggest that independent lenders and nonbank subsidiaries of holding companies are more likely than depository institutions to engage in mortgage pricing discrimination. While depository institutions may represent relatively less risk of fair lending violations, they generally are subject to a comprehensive oversight program. Specifically, depository institution regulators conduct oversight examinations of most depository institutions that are identified as outliers (more than an estimated 400 such examinations were initiated and largely completed based on the 2005 and 2006 HMDA data analysis) and have established varying policies to conduct routine fair lending compliance oversight of many other depository institutions as well. In contrast, enforcement agencies, which have jurisdiction over independent lenders, have conducted relatively few investigations of such lenders that have been identified as outliers over the past several years (for example, HUD and FTC have initiated 22 such investigations since 2005). HUD and FTC also generally do not conduct fair lending investigations of independent lenders that are not viewed as outliers. While the Federal Reserve can conduct outlier examinations of nonbank subsidiaries as it does for state-chartered depository institutions under its jurisdiction, it lacks clear authority to conduct routine consumer compliance, including fair lending, examinations of such nonbank lenders as it does for state member banks. To some degree, these differences reflect differences between the missions of enforcement agencies and depository institution regulators, as well as resource considerations. They also illustrate critical deficiencies in the fragmented U.S. financial regulatory structure, which is divided among multiple federal and state agencies. In particular, the current regulatory structure does not ensure that independent lenders and nonbank subsidiaries receive the same level of oversight as other financial institutions. As we have stated previously, congressional action to reform the financial regulatory system is needed and could, among a range of benefits, help to ensure more comprehensive and consistent fair lending oversight.
Based on the Federal Reserve's annual screening lists, independent mortgage lenders represent relatively higher risks of fair lending law violations than federally insured depository institutions (see table 4). On the basis of 2004–2007 HMDA data, the Federal Reserve annually identified on average 116 independent mortgage lenders through its pricing screens, which represented about 6 percent of all independent mortgage lenders that file HMDA data. In contrast, the Federal Reserve identified on average 118 depository institutions as outliers during the same period, which represented less than 2 percent of depository institutions that file HMDA data. Independent mortgage lenders and nonbank subsidiaries of holding companies have been a source of significant concern and controversy for fair lending advocates in recent years. As we reported in 2007, 14 of the top 25 originators of subprime and Alt-A mortgages were independent mortgage lenders, and they accounted for 44 percent of such originations. Similarly, we found that 7 of the 25 largest originators of subprime and Alt-A mortgages in 2007 (accounting for 37 percent of originations) were nonbank subsidiaries of bank and savings and loan holding companies. The remaining four originators were depository institution lenders. We also reported that many such high-cost, and potentially heightened-risk, mortgages appear to have been made to borrowers with limited or poor credit histories and subsequently resulted in significant foreclosure rates for such borrowers. In a 2007 report, we found that the market share of subprime lending had grown dramatically among minority and other borrowers, at the expense of the market for mortgage loans insured by the Federal Housing Administration.

Depository institution regulators oversee fair lending compliance through targeted examinations of institutions that are identified as outliers through screening HMDA data or through routine examinations of the institutions under compliance or safety and soundness examination programs. A key objective of the depository institution regulators' fair lending outlier examinations, which generally are to take place within 12–18 months of a lender being placed on such a list, is to determine if initial indications of heightened fair lending risk warrant further review and potential administrative or enforcement action, which can serve to punish violators and deter violations by other lenders. To assess lender compliance, each of the depository institution regulators is to follow the Interagency Fair Lending Examination Procedures, which were established jointly by the depository institution regulators in 1999. While the interagency fair lending procedures are intended to be flexible to meet the specific requirements of each depository institution regulator, they contain general procedures to be included in examinations, according to officials. Specifically, under the guidelines, examiners are to request information from each lender about its underwriting and pricing policies and procedures, the types of loan products offered, and the degree of loan officer discretion in making underwriting and pricing decisions. The depository institution regulators also assess the accuracy of the lender's HMDA data and request loan underwriting and pricing data.
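The screening step described above essentially asks whether pricing outcomes differ by group more than chance would explain. The sketch below is a simplified illustration, using hypothetical loan counts and a two-proportion z-test to flag lenders whose minority borrowers received higher-priced (rate-spread-reportable) loans significantly more often than nonminority borrowers; it is not a reproduction of the Federal Reserve's actual screening methodology.

```python
# Illustrative sketch only: a simplified pricing "screen" that flags lenders
# whose minority borrowers received higher-priced loans significantly more
# often than nonminority borrowers. The data layout, grouping, and the
# two-proportion z-test are illustrative assumptions; they do not reproduce
# the Federal Reserve's actual screening methodology.
from math import sqrt, erf

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for the difference between two sample proportions."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# lender -> (higher-priced minority loans, total minority loans,
#            higher-priced nonminority loans, total nonminority loans); hypothetical counts
lenders = {
    "Lender A": (120, 400, 150, 1600),
    "Lender B": (30, 300, 95, 1000),
}

outliers = [
    name for name, (x1, n1, x2, n2) in lenders.items()
    if x1 / n1 > x2 / n2 and two_proportion_p_value(x1, n1, x2, n2) < 0.05
]
print("Flagged for follow-up examination:", outliers)
```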
The depository institution regulators also interview lending officials to ensure they properly understand the policies and procedures and discuss any remaining discrepancies that have been identified between mortgage applicants and borrowers of different ethnicity, race, or sex. The examiners also generally review lender files to assess potential discrepancies, particularly when disparities in the data persist after accounting for underwriting variables. Finally, examiners may review the lender’s marketing efforts to check for fair lending violations and assess the lender’s fair lending compliance monitoring procedures and training programs to ensure that efforts are sufficient for ensuring compliance with fair lending laws. Our reviews of completed fair lending outlier examinations indicated general agency compliance with established policies and procedures. Based on our file review, we estimate that the depository institution regulators initiated and largely completed more than 400 examinations of lenders that were identified as outliers on the basis of their analysis of 2005 and 2006 HMDA data. The combined outlier lists for each HMDA data year contained more than 200 lenders. Furthermore, our analysis of examination files generally identified documentation that showed that depository institution regulators followed key procedures in the interagency fair lending guidance, including reviewing underwriting policies, incorporating underwriting data into analysis, and conducting interviews with the lending institution officials. While we identified documentation of these key elements, our review did not include an analysis of the depository institution regulators’ effectiveness in identifying potentially heightened risks for fair lending law violations. However, our review identified certain differences and, in some cases, limitations in the depository institution regulators’ fair lending examination programs, which are discussed in the next section. Depository institution regulators also have established varying policies to help ensure that many lenders not identified through HMDA screening routinely undergo compliance examinations, which may include fair lending components. Such routine examinations may be critical because HMDA data analysis may not detect all potentially heightened risks for violations, and many smaller lenders are not required to file HMDA data. For example, FDIC, Federal Reserve, and OTS officials said they have policies to conduct on-site examinations of lenders for consumer compliance, including fair lending examinations, generally every 12–36 months, primarily depending on the size of the lender and the lender’s previous examination results. Moreover, FDIC, Federal Reserve, and OTS officials said they conduct a fair lending examination in conjunction with every scheduled compliance examination. OCC selects a sample of all lenders—including those that are not required to file HMDA data—for targeted fair lending examinations. OCC officials said its examiners then conduct a more in-depth fair lending examination on these randomly selected institutions, which averages about 30 institutions per year. NCUA generally conducts fair lending examinations on a risk basis, as described later in this report, and generally does not conduct routine fair lending examinations of credit unions that are not viewed as representing potentially heightened risks. 
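A recurring analytical step in the examinations described above is checking whether an apparent pricing disparity persists after accounting for underwriting variables. The sketch below illustrates that idea with a simple linear regression on hypothetical data: the coefficient on the minority-borrower indicator, after controls for credit score, loan-to-value ratio, and debt-to-income ratio, represents the disparity the controls do not explain. The variable names, data, and model form are illustrative assumptions of our own, not any regulator's actual specification.

```python
# Illustrative sketch only: testing whether a raw pricing disparity persists
# after controlling for underwriting variables. The variable names, data, and
# simple linear model are illustrative assumptions, not an agency's actual
# model specification.
import numpy as np

# One row per loan: annual percentage rate, credit score, loan-to-value ratio,
# debt-to-income ratio, and a minority-borrower indicator (hypothetical data).
apr      = np.array([7.9, 6.1, 8.4, 6.0, 7.2, 6.3, 8.8, 6.5])
score    = np.array([640, 760, 615, 755, 690, 745, 600, 720])
ltv      = np.array([0.95, 0.80, 0.97, 0.75, 0.90, 0.80, 0.98, 0.85])
dti      = np.array([0.44, 0.30, 0.46, 0.28, 0.38, 0.33, 0.47, 0.35])
minority = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Raw disparity: difference in average APR between minority and nonminority borrowers.
raw_gap = apr[minority == 1].mean() - apr[minority == 0].mean()

# Ordinary least squares: APR on an intercept, underwriting controls, and the
# minority indicator. The indicator's coefficient is the unexplained disparity.
X = np.column_stack([np.ones_like(apr), score, ltv, dti, minority])
coefs, *_ = np.linalg.lstsq(X, apr, rcond=None)

print(f"raw pricing gap: {raw_gap:.2f} points; gap after controls: {coefs[-1]:.2f} points")
```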
While depository institution regulators may identify potentially heightened risks for fair lending violations through their outlier and routine examinations, ECOA requires that they refer to DOJ, for further investigation and potential enforcement, all cases for which they have reason to believe that a pattern or practice of discrimination has occurred. Moreover, depository institution regulators must provide notice to HUD whenever they have reason to believe that an FHA and ECOA violation has occurred and the matter has not been referred to DOJ as a potential pattern or practice violation of ECOA. Therefore, depository institution regulators generally do not have to devote the time and resources necessary to determine whether the federal government should pursue litigation against depository institutions and, if so, conduct such litigation, as this is the responsibility of the enforcement agencies. However, depository institution regulators may pursue other actions against lenders for fair lending violations through their administrative authorities, including monetary penalties; cease and desist orders to remedy the institution's systems, policies, and procedures; restitution to reimburse and provide remedies for harmed consumers; and additional ameliorative measures, such as creating community or financial literacy programs to assist consumers.

Depository institution regulators also typically have large examination staffs and other personnel to carry out fair lending oversight. At the depository institution regulators, fair lending oversight generally is housed in offices that are responsible for oversight of a variety of consumer compliance laws and regulations and the CRA, in addition to the fair lending laws. While ensuring compliance with these laws is challenging, as there may be thousands of depository institutions under the jurisdiction of each depository institution regulator, regulators typically have hundreds of examiners to carry out these responsibilities. Moreover, the Federal Reserve, FDIC, OCC, and OTS also employ economists and statisticians to assist in fair lending oversight. NCUA officials said that the agency does not employ statisticians. However, all of the depository institution regulators have attorneys who are involved in supporting fair lending oversight and other consumer law compliance activities.

While independent lenders and nonbank subsidiaries of holding companies may represent higher fair lending risks than depository institutions, federal reviews of their activities are limited. According to HUD and FTC officials, since 2005, the agencies have initiated a combined 22 investigations of independent mortgage lenders for potentially heightened risks for fair lending violations. FTC opened more than half (13) of these investigations in 2009, and those investigations currently are in the initial stages. DOJ also has opened several such investigations and has conducted investigations of nonbank subsidiaries of bank holding companies and savings and loan holding companies based on referrals from the depository institution regulators. Therefore, in many cases the enforcement agencies have not conducted investigations where the Federal Reserve's initial analysis of HMDA data suggests statistically significant mortgage pricing disparities between minority and nonminority borrowers. As discussed previously, the Federal Reserve has identified on average 116 independent lenders annually for mortgage pricing disparities based on its analysis of HMDA data since 2005.
While DOJ, HUD, and FTC may independently analyze HMDA data to identify lenders that they view as representing the highest risks and target their investigations accordingly, as discussed previously, in the absence of underwriting data the agencies cannot be assured that other lenders with statistically significant differences in mortgage pricing for minority and nonminority borrowers are in compliance with the fair lending laws. HUD and FTC also generally do not initiate investigations of independent lenders that are not viewed as outliers. According to FTC officials, such investigations are not initiated largely due to resource limitations, which are discussed below. Therefore, unlike most depository institution regulators, enforcement agencies do not assess the fair lending compliance of independent lenders through routine oversight.

Once DOJ, HUD, or FTC identifies a particular lender as potentially having violated fair lending laws, its initial investigative efforts generally resemble the depository institution regulators' outlier examinations. For example, DOJ, HUD, and FTC officials said they request that such lenders provide loan underwriting policies and procedures, information on the types of loan products offered, and information on the extent to which loan officers have discretion over loan approvals and denials or the pricing terms (interest rates or fees) at which an approved loan will be offered. According to agency officials, if loan officers have substantial discretion under lender policies, the risk of discriminatory lending decisions is higher. DOJ, HUD, and FTC officials also may request raw HMDA data from lenders, test its accuracy, and request loan underwriting or overage data. With this information, DOJ, HUD, and FTC officials said they conduct additional statistical analysis to help determine if initial disparities based on ethnicity, race, or sex can be explained by underwriting information. DOJ, HUD, and FTC officials also may determine if the lender internally monitors fair lending compliance and interview representatives of the lending institution. Finally, DOJ, HUD, and FTC may review loan files. In such reviews, investigators generally try to identify, frequently through statistical analysis, similarly situated applicants and borrowers of different ethnicity, race, or sex to determine if there was any discrimination in the lending process. On the basis of their investigations, HUD, DOJ, and FTC determine if sufficient evidence exists to file complaints against the lenders subject to such investigations and to pursue litigation where deemed appropriate.

Enforcement agencies also have established efforts to coordinate their activities and prioritize investigations of independent lenders and other institutions, as necessary. For example, enforcement agency officials said that they meet periodically to discuss investigations and have shared information derived from investigations. According to DOJ, the agency, FTC, and HUD also have a working group that meets on a bimonthly basis to discuss HMDA pricing investigations on nonbank lenders and to discuss issues common to the three enforcement agencies in their shared oversight of nonbank lenders. The differences in the enforcement agencies' capacity to pursue potential risks for violating the fair lending laws, relative to the depository institution regulators, result in part from resource considerations.
For example, in a 2004 report, we assessed federal and state efforts to combat predatory lending (practices including deception, fraud, or manipulation that a mortgage broker or lender may use to make a loan with terms that are disadvantageous to the borrower), which can have negative effects similar to fair lending violations. We questioned the extent to which FTC, as a federal enforcer of consumer protection laws for nonbank subsidiaries, had the capacity to combat such practices. We stated that FTC's mission and resource allocations were focused on conducting investigations in response to consumer complaints and other information rather than on routine monitoring and examination responsibilities. Our current work also indicates that resource considerations may affect the relative capacity of enforcement agencies to conduct fair lending oversight. For example, at HUD, responsibility for conducting such investigations lies with the Fair Lending Division in the Office of Systemic Investigations (OSI), which was established in 2007 within HUD's Office of Fair Housing and Equal Opportunity. OSI currently has eight staff, including four equal opportunity specialists and two economists. At FTC and DOJ, the units responsible for fair lending oversight each have fewer than 50 staff and a range of additional consumer protection law responsibilities. FTC's Division of Financial Practices (DFP) has 39 staff, including 27 line attorneys, and is responsible for fair lending enforcement as well as many other consumer protection laws in the financial services arena, such as the Fair Debt Collection Practices Act and Section 5 of the FTC Act, which generally prohibits unfair or deceptive acts or practices. In addition, economists and research analysts from FTC's Bureau of Economics assist in DFP investigations, particularly with data analysis. At DOJ, the unit responsible for fair lending investigations, the Housing and Civil Enforcement Section, includes 38 staff attorneys, two economists, and one mathematical statistician; the attorneys have a range of enforcement responsibilities, including enforcing laws against discrimination in rental housing, insurance, land use, and zoning.

The President's proposed budget for fiscal year 2010 requested additional resources for fair lending oversight. For example, HUD's proposed budget includes $4 million for additional staff to address abusive and fraudulent mortgage practices and increase enforcement of mortgage and home purchase settlement requirements. This budget request would increase staffing for HUD's Office of Fair Housing and Equal Opportunity to expand fair lending efforts and for the Office of General Counsel to handle increased fair lending and mortgage fraud enforcement, among other initiatives. Further, the budget request includes an additional $1.3 million to fund increases for DOJ's Housing and Civil Enforcement Section's fair housing and fair lending enforcement, including five additional attorney positions. In its fiscal year 2010 budget request, FTC requested nine additional full-time equivalent staff for financial services consumer protection law enforcement, which officials noted includes fair lending.
While the nonbank subsidiaries of bank holding companies also may pose heightened risks of fair lending violations, the Federal Reserve has interpreted its authority under the Bank Holding Company Act, as amended by the Gramm-Leach-Bliley Act, as limiting its examination authority over such entities compared with the examination authority that it and other depository institution regulators have over depository institutions. The Federal Reserve interprets its authority as permitting it to conduct consumer compliance oversight of nonbank subsidiaries when there is evidence of potentially heightened risks for violations, such as through annual analysis of HMDA data or other sources of information such as previous examinations or consumer complaints. However, pursuant to a 1998 policy, Federal Reserve examiners are prohibited from conducting routine consumer compliance examinations of nonbank subsidiaries. According to FTC, while the agency also has authority over nonbank subsidiaries, its capacity to oversee them is limited due to resource constraints, as discussed earlier.

Due to the risks associated with nonbank subsidiaries, in 2004, we suggested that Congress consider (1) providing the Federal Reserve with the authority to routinely monitor and, as necessary, examine nonbank subsidiaries of bank holding companies to ensure compliance with federal consumer protection laws and (2) giving the Federal Reserve specific authority to initiate enforcement actions under those laws against these nonbank subsidiaries. While Congress has not yet acted on our 2004 suggestion, Federal Reserve officials said that they have taken a variety of steps within their authority since our 2004 report to strengthen consumer compliance supervision, including fair lending supervision, of nonbank subsidiaries. In particular, they said the Federal Reserve created a unit in 2006 dedicated to consumer compliance issues associated with large, complex banking organizations, including their nonbank subsidiaries. In addition, Federal Reserve officials said examiners are to conduct consumer compliance risk assessments of nonbank subsidiaries in addition to their supervisory responsibilities for bank holding companies. Based on these risk assessments, the officials said examiners may conduct a targeted examination on a case-by-case basis. Furthermore, when a nonbank subsidiary has been identified as a potential outlier, Federal Reserve officials said that, similar to oversight practices for state member banks, they assess the entity for risk of pricing discrimination and may conduct additional statistical pricing reviews through the use of HMDA data and other information to better understand its potential risks. During such reviews, Federal Reserve officials said that examiners closely review the lender's policies and procedures and, with the approval of the Director of Consumer Compliance, also may conduct loan file reviews if there is potential evidence of a fair lending violation. Federal Reserve officials said that they have referred one nonbank subsidiary for pricing discrimination to DOJ in recent years. We also note that in 2007 the Federal Reserve began a pilot program with OTS, FTC, and state banking agencies to monitor the activities of nonbank subsidiaries of bank and savings and loan holding companies. OTS has jurisdiction over savings and loan holding companies and any of their nonbank subsidiaries.
During the pilot program, agency officials said that they conducted coordinated consumer compliance reviews of several nonbank subsidiaries and related entities, such as mortgage brokers that may be regulated at the state level, to assess their compliance with various federal and state consumer protection laws, including fair lending laws. According to Federal Reserve, OTS, and FTC officials, they recently completed their reviews of the pilot study and are evaluating how the results might be used to better ensure consumer compliance, including fair lending oversight, of nonbank subsidiaries. While the Federal Reserve's process for reviewing nonbank subsidiaries identified as potentially posing fair lending risks and the pilot study are important steps, its lack of clear authority to conduct routine examinations of nonbank subsidiaries for compliance with all consumer protection laws appears to be a significant gap. Given the limitations in HMDA data described in this report, agency screening programs may have limited success in detecting fair lending violations. According to a Federal Reserve official, many potential violations of the fair lending laws and subsequent referrals of state-chartered banks are identified through routine examinations rather than the outlier examination process. Without clear authority to conduct similar routine examinations of nonbank subsidiaries for their fair lending compliance, the Federal Reserve may not be in a position to identify as many potential risks for fair lending violations at such entities as it does through the routine examinations of state member banks.

The relatively limited fair lending oversight of independent lenders and nonbank subsidiaries reflects the fragmented and outdated U.S. financial regulatory system. As described in our previous work, the U.S. financial regulatory structure, which is divided among multiple federal and state agencies, evolved over 150 years largely in response to crises, rather than through deliberative legislative decision-making processes. This fragmentation has created significant gaps in federal oversight of financial institutions that represent significant risks. In particular, and consistent with our discussion of fair lending oversight, federal depository institution regulators lack clear and sufficient authority to oversee independent and nonbank lenders. Congress and the administration currently are considering a range of proposals to revise the current fragmented financial regulatory system. In our January 2009 report, we stated that reforms urgently were needed and identified a framework for crafting and evaluating regulatory reform proposals that consisted of characteristics that should be reflected in any new regulatory system. These characteristics include clearly defined and relevant regulatory goals—to ensure that depository institution regulators can effectively carry out their missions and be held accountable; a systemwide focus—for identifying, monitoring, and managing risks to the financial system regardless of the source of the risk; consistent consumer and investor protection—to ensure that market participants receive consistent, useful information, as well as legal protections; and consistent financial oversight—so that similar institutions, products, risks, and services are subject to consistent regulation, oversight, and enforcement.
Any regulatory reform efforts, consistent with these characteristics, should include an evaluation of ways in which to ensure that all lenders, including independent lenders and nonbank subsidiaries, will be subject to similar regulatory and oversight treatment for safety and soundness and consumer protection, including fair lending laws. In the absence of such reforms, oversight and enforcement of fair lending laws will continue to be inconsistent. Although depository institution regulators’ initial activities to assess evidence of potentially heightened risks for fair lending violations generally have been more comprehensive than those of enforcement agencies, their oversight programs also face challenges that are in part linked to the fragmented regulatory structure. While depository institution regulators have taken several steps to coordinate their fair lending oversight activities where appropriate, the effects of these efforts have been unclear. Each depository institution regulator uses a different approach to screen HMDA data and other information to identify outliers, and the management of their outlier examination programs and the documentation of such examinations varied. For example, FDIC, Federal Reserve, and OTS described centralized approaches to managing their outlier programs while NCUA’s and OCC’s management approaches were more decentralized. In contrast to other depository institution regulators, OCC’s outlier examination documentation standards and practices were limited, although the agency recently has taken steps to improve such documentation. Finally, depository institutions under the jurisdiction of FDIC, Federal Reserve, and OTS were far more likely to be subject to referrals to DOJ for potentially being at heightened risk for fair lending violations than those under the jurisdiction of NCUA and OCC. These differing approaches raise questions about the consistency and effectiveness of the depository institution regulators’ collective fair lending oversight efforts, which are likely to persist so long as the fragmented regulatory structure remains in place. Given the current fragmented structure of the federal regulatory system, we have stated that collaboration among agencies that share common responsibilities is essential to ensuring consistent and effective supervisory practices. Such collaboration can take place through various means including developing clear and common outcomes for relevant programs, establishing common policies and procedures, and developing mechanisms to monitor and evaluate collaborative efforts. In keeping with the need for effective collaboration, depository institution regulators as well as enforcement agencies have taken several steps to establish common policies and procedures and share information about their fair lending oversight programs. These steps include the following: Since 1994, depository institution regulators and enforcement agency officials have participated in an Interagency Fair Lending Task Force. The task force was established to develop a coordinated approach to address discrimination in lending and adopted a policy statement in 1994 on how federal regulatory and enforcement agencies were to conduct oversight and enforce the fair lending laws. 
Federal officials said that the task force, which currently meets on a bimonthly basis, continues to allow depository institution regulators and enforcement agencies to exchange information on a range of common issues, informally discuss fair lending policy, and confer about current trends or challenges in fair lending oversight and enforcement. For example, officials said that depository institution regulators and enforcement agencies may discuss how they generally approach fair lending issues, such as outlier screening processes. According to depository institution regulators, because the task force is viewed as an informal information-sharing body, it has not produced any reports on federal fair lending oversight and no meeting minutes are kept. Moreover, officials said that economists from the depository institution regulators contact each other separately from the task force to discuss issues, including their screening processes for high-risk lenders and emerging risks. According to FDIC, attorneys from different agencies also contact each other about specific legal issues and share relevant research. DOJ officials indicated that they regularly discuss specific legal issues with attorneys from the depository institution regulators, HUD, and FTC. As discussed previously, in 1999, the depository institution regulators jointly developed interagency fair lending procedures. According to depository institution regulatory officials, they are in the process of revising and updating the procedures through the Federal Financial Institutions Examination Council Consumer Compliance Task Force. They expect the updated examination guidelines to be finalized and adopted in 2009, with potential enhancements to the pricing, applicant steering, mortgage broker, and redlining sections of the guidance.

While the Federal Reserve annually reviews HMDA data to identify lenders at potentially heightened risk for fair lending violations related to mortgage pricing disparities, each depository institution regulator uses its own approach to identify potential outliers. Specifically, FDIC and Federal Reserve examination officials generally develop their own outlier lists on the basis of statistically significant pricing disparities. FDIC's and the Federal Reserve's approaches differ from one another and from the Federal Reserve's annual mortgage pricing outlier list that is distributed to all agencies. FDIC officials said that the agency's approach to developing its pricing outlier list is geared toward the smaller state-chartered banks that primarily are under its jurisdiction. Federal Reserve officials said they supplement the annual mortgage pricing outlier list for lenders under their jurisdiction with additional information. For example, the officials said this information includes assessments of the discretion and financial incentives that loan officers have to make mortgage pricing decisions, the lenders' business models, and past supervisory findings. As we discussed earlier, both FDIC and the Federal Reserve noted that they also screen HMDA data and other information to assess other risk factors, such as redlining. However, such screening is done in conjunction with their routine examination processes rather than their outlier examination processes. In contrast, OCC and OTS generally consider a broader range of potential risk factors beyond pricing disparities in developing their annual outlier lists.
According to OCC officials, in addition to the Federal Reserve's outlier list and OCC's independent analysis of mortgage pricing disparities, OCC also conducts screening relating to approval and denial decisions, terms and conditions, redlining, and marketing. Similarly, OTS officials said that, in developing their outlier lists, they use risk factors beyond mortgage pricing disparities, such as mortgage loan approval and denial decisions, redlining, and steering. NCUA does not currently conduct independent assessments of HMDA data as it does not have any statisticians to do so, according to an agency official. Instead, NCUA officials said that the agency prioritizes fair lending examinations based on several factors, which include the Federal Reserve's annual pricing screening list, complaint data, safety and soundness examination findings, discussion with regional officials, and budget factors. Over the past several years, NCUA has conducted approximately 25 fair lending examinations each year, and these examinations are generally divided equally among its five regional offices. NCUA's Inspector General reported in 2008 that analytical efforts for identifying discrimination in lending were limited, but the agency was developing analyses to screen for potential discriminatory lending patterns, which were expected to be operational in 2009.

There may be a basis for depository institution regulators to develop fair lending outlier screening processes that are suited to the specific types of lenders under their jurisdiction. Nevertheless, the use of six different approaches among the five depository institution regulators (the Federal Reserve's annual analysis plus the unique approach at each regulator) to assess the same basic data source raises questions about duplication of effort and the inefficient use of limited oversight resources. In this regard, we note that OCC's independent analysis of HMDA data in 2007 identified twice as many national banks and other lenders under its jurisdiction with mortgage pricing disparities as the Federal Reserve did in its mortgage pricing analysis of lenders under OCC's jurisdiction. With a continued division of fair lending oversight responsibility among multiple depository institution regulators, opportunities to develop a coordinated approach to defining and identifying outliers and to better prioritize oversight resources may not be realized.

The depository institution regulators differed in the extent to which they centrally manage examination processes, documentation, and reporting. FDIC, the Federal Reserve, and OTS officials described a more centralized (headquarters-driven) approach to ensuring that outlier examinations are initiated and necessary activities carried out. Headquarters officials from these agencies described approaches they used to ensure that fair lending examiners and other staff in regional and district offices conduct outlier examinations, document examination findings and recommendations, and follow up on recommendations. In addition to running the HMDA data outlier screening programs, FDIC, Federal Reserve, and OTS officials said that they held ongoing meetings with headquarters and district staff to discuss outlier examinations and their findings. FDIC officials said that the agency has developed a process for conducting reviews of completed outlier and routine examinations to assess if the agency is consistently complying with the interagency fair lending examination procedures.
Officials from FDIC, the Federal Reserve, and OTS also said that headquarters staffs were involved in conducting legal and other analyses needed to determine if a referral should be made to DOJ for a potential pattern or practice violation. FDIC, OTS, and the Federal Reserve have developed fair lending examination documentation and reporting standards and practices designed to facilitate the centralized management of their outlier programs. Such examination documentation and reporting standards generally are consistent with federal internal control policies that require that agencies ensure that relevant, reliable, and timely information be readily available for management decision-making and external reporting purposes. For example, FDIC staff generally prepare summary memorandums that describe critical aspects of outlier examinations. These memorandums discuss when examinations were initiated and conducted; the initial focal point (such as mortgage interest rate disparities in conventional loans between African-American and non-Hispanic white borrowers) identified through HMDA data analysis; the methodologies used to assess if additional evidence of potential lending discrimination existed for each focal point; and any findings or recommendations. According to an FDIC headquarters official, FDIC headquarters manages the outlier reviews in collaboration with regional and field office staff. In addition to the outlier reviews, summary documents are reviewed on an ongoing basis to monitor the nationwide implementation of the fair lending examination program and allow the agency to assess the extent to which lenders are implementing examination recommendations. Additionally, in 2007, FDIC required that examiners complete a standardized fair lending scope and summary memorandum to help ensure implementation of a consistent approach to documenting fair lending reviews. OTS also generally requires its examiners to prepare similar summary documentation of outlier examinations, which agency officials said is used to help manage the nationwide implementation of its outlier examination program. The Federal Reserve has developed management reports, which track major findings of outlier examinations and potentially heightened risks for violations of the fair lending laws and referrals to DOJ, to ensure that fair lending laws are consistently enforced and examiners receive appropriate legal and statistical guidance. Federal Reserve officials said that the Reserve Banks generally maintain documentation of the outlier examinations in paper or electronic form; however, electronic versions of examination reports generally are available at the headquarters level.

While NCUA and OCC officials also indicated that headquarters staff performed critical functions, such as HMDA data screening or developing policies for conducting fair lending examinations, they generally described more decentralized approaches to managing their outlier examination programs. For example, OCC officials said that the agency's supervisory offices are responsible for ensuring that examinations are initiated on time, key findings are documented, and recommendations are implemented. Among other responsibilities, OCC headquarters staff provide overall policy and supervisory direction, develop appropriate responses to emerging fair lending issues, provide ongoing assistance to field examiners as needed, and assist in determining whether referrals or notifications to other agencies are necessary or appropriate.
OCC also conducts quality assurance reviews, including an audit of fair lending examinations at large banks that was completed in 2007. NCUA officials said that headquarters staff are involved in managing the selection of the approximately 25 fair lending examinations that are conducted each year, but regional staff play a significant role in selecting credit unions for examination on a risk basis. NCUA officials said that they do not routinely monitor regional compliance with the interagency fair lending examination procedures, as this is largely the responsibility of regional officials. However, NCUA staff at the central office randomly review a selection of the fair lending examinations that are sent from the regional offices to ensure compliance with established procedures. NCUA's examination files generally included a single summary document that described scope, key findings, and recommendations made, if any, which facilitated our review.

However, due to OCC's approach to documenting outlier examinations, we faced certain challenges in assessing the agency's compliance with its examination schedules and procedures for the period we reviewed. For example, OCC was unable to verify when outlier examinations were started for most of its large banks. OCC officials told us that part of the reason for this was that OCC conducts continuous supervision of large banks, and the database for large banks does not contain a field for examination start and end dates. Also, the documentation of outlier examination methodologies, findings, and recommendations was not readily available or necessarily summarized in memorandums for management's review. Rather, a variety of examination materials contained critical items, and retrieving such documentation from relevant information systems was time-consuming. In 8 of the 27 OCC outlier examinations we reviewed, the documentation did not identify examination activities undertaken to assess lenders' fair lending compliance as being part of the outlier examination program. In 2007, an OCC internal evaluation of its large bank fair lending program found that key aspects of the agency's risk-assessment process, such as its methodology, data analysis, and meetings with bank management, were not well documented. However, the report also found that OCC fair lending examinations of large banks generally followed key interagency examination procedures and that adequate documentation supported the conclusions reached. The evaluation recommended that OCC develop a common methodology to assess fair lending risk and better documentation standards, which the agency is in the process of implementing. In May 2009, OCC officials told us that they recently had taken steps to improve the ability to retrieve data from their documentation system. For example, for its database for midsize and community banks, OCC added a keyword search function to identify key information, such as the HMDA outlier year on which the examination was based. However, it is too soon to tell what effects these changes will have on OCC's fair lending examination documentation standards and practices. Unless these changes begin to address documentation limitations that we and OCC's internal evaluation identified, OCC management's capacity to monitor the implementation, consistency, and reporting of the agency's fair lending examination program will be limited.
There are significant differences in the practices that the depository institution regulators employ to make referrals to DOJ and in the number of referrals they have made. In response to a previous GAO recommendation, DOJ provided guidance to the federal depository institution regulators on pattern or practice referrals in 1996. The DOJ memorandum identified criteria for determining if an ECOA violation identified in a depository institution regulatory referral is appropriate for DOJ's further investigation and potential legal action or should be returned to the referring agency for administrative resolution. These criteria include the potential for harm to members of a protected class, the likelihood that the practice will continue, whether the practice identified was a technical violation, whether the harmed members can be fully compensated without court action, and the potential impact of federal court action, including the payment of damages to deter other lenders engaged in similar practices. Moreover, DOJ officials told us that they encourage depository institution regulators to consult with them on potential referrals.

While DOJ has issued long-standing guidance on referrals, depository institution regulatory officials indicated that different approaches may be used to determine if initial indications of potential risks for fair lending violations identified through HMDA screening warrant further investigation or referral to DOJ. For example, OCC and OTS officials said that they considered a range of data and information and conducted analyses before making a referral to DOJ. According to agency officials, this information might include statistical analysis of HMDA and loan underwriting data, reviews of policies and procedures, and on-site loan file reviews. OCC and OTS officials said that staff routinely conduct such file reviews as one of several approaches to assessing a lender's fair lending compliance and likely would not refer a case without conducting such reviews. In contrast, while FDIC and the Federal Reserve may also conduct file reviews to extract data and/or confirm an institution's electronic data, officials said that statistical analyses of HMDA and underwriting and pricing data could serve and have served as the primary basis for concluding that lenders may have engaged in a pattern or practice violation of ECOA and as the basis for making referrals to DOJ. NCUA generally relies on on-site examinations and loan file reviews to reach conclusions about lender compliance with the fair lending laws and, as mentioned earlier, does not conduct independent statistical reviews of credit unions' HMDA data. OCC officials said referrals for potential fair lending violations are not insignificant matters, either for the lender or DOJ, and they have established processes to ensure that any such referrals are warranted.

As shown in figure 2, the number of referrals varied by depository institution regulator. FDIC accounted for 91 of the 118 referrals (77 percent) that depository institution regulators made to DOJ from 2005 through 2008. In contrast, OCC made one referral during this period and NCUA none.
OCC officials said that since 2005 their examiners have identified technical violations of the fair lending laws and weaknesses in controls that warranted the attention of bank management, but that the identification of potential pattern or practice violations was "infrequent." NCUA officials said their examiners had reported technical violations but had not identified any pattern or practice violations, and thus made no referrals to DOJ. From 2005 to 2008, we found that about half of the referrals that the depository institution regulators made resulted from marital status-related violations of ECOA—such violations can include lender policies that require spousal guarantees on loan applications. FDIC accounted for about 82 percent of such referrals (see fig. 3). DOJ officials said they generally returned such referrals to the depository institution regulators for administrative or other resolution. The one institution that OCC referred to DOJ in 2008 involved a marital status violation, which DOJ subsequently returned to OCC for administrative resolution. FDIC noted that DOJ does not opine on a matter when it is deferred to the depository institution regulator for administrative enforcement. Specifically, DOJ does not make its own determination of whether there was discrimination or whether there was a pattern or practice warranting the referral. The deferral of a matter is simply an agreement that the depository institution regulator is in a better position to resolve the violation through administrative measures. Referrals from FDIC, the Federal Reserve, and OTS also involved mortgage pricing disparities and other key areas (see table 5). Specifically, in the 110 outlier examinations that we reviewed that were conducted by these three depository institution regulators, the regulators identified potential pattern or practice violations based on statistically significant pricing disparities in 11 cases, or 10 percent of the examinations, and referred the cases to DOJ. DOJ indicated that several of these referrals had been returned to the depository institution regulators for administrative enforcement, while the remaining referrals are still in DOJ's investigative process.

While it is difficult to fully assess the reasons for the differences in referrals and outlier examination findings across the depository institution regulators without additional analysis, these differences raise important questions about the consistency of fair lending oversight. In particular, depository institutions under the jurisdiction of OTS, FDIC, and the Federal Reserve appear to be far more likely to be the subject of fair lending referrals to DOJ and potential investigations and litigation than those under the jurisdiction of OCC and NCUA. Under the fragmented regulatory structure, differences across the depository institution regulators in terms of their determination of what constitutes an appropriate referral as well as fair lending examination findings are likely to persist.

Enforcement agency litigation involving the fair lending laws has been limited in comparison with the number of lenders identified through analyses of HMDA data and other information. For example, since 2005, DOJ and FTC have reached settlements in eight cases involving alleged fair lending violations, while HUD has not yet reached any settlements. Among other factors, resource considerations may account for the limited amount of litigation involving potential fair lending violations.
Federal officials also identified other challenges to fair lending oversight and enforcement, including a complex and time-consuming investigative process, difficulties in recruiting legal and economic staff with fair lending expertise, and ECOA's 2-year statute of limitations for civil actions initiated by DOJ under its own authority or on the basis of referrals from depository institution regulators.

According to HUD officials, the department has filed two Secretary-initiated complaints against lenders alleging discrimination in their lending practices. The officials said that HUD is currently considering whether, pursuant to the FHA, to issue Charges of Discrimination in administrative court in these two matters. If HUD decides to issue such charges in administrative court, any party may elect to litigate the case instead in federal district court, in which case DOJ assumes responsibility from HUD for pursuing litigation. Since 2005, FTC, under its statutory authority, has filed complaints against two mortgage lenders in federal district court for potential discriminatory practices; it has settled one of these complaints, while the other is pending. FTC's settlement dated December 17, 2008, with Gateway Funding Diversified Mortgage Services, L.P. (Gateway) and related entities provides an example of potential fair lending law violations and insights into federal enforcement activities. FTC filed a complaint against Gateway on the basis of an alleged ECOA pricing violation involving the prime, subprime, and government loans, such as FHA-insured mortgage loans, that the company originated. According to FTC, Gateway's policy and practice of allowing loan officers to charge discretionary overages that included higher interest rates and higher up-front charges resulted in African-Americans and Hispanics being charged higher prices because of their race or ethnicity. FTC alleged that the price disparities were substantial, statistically significant, and could not be explained by factors related to underwriting risk or credit characteristics of the mortgage applicants. Under the terms of the settlement, Gateway agreed to pay $2.9 million in equitable monetary relief for consumer redress ($2.7 million of which was suspended due to the company's inability to pay); establish a fair lending monitoring program specifically designed to detect and remedy fair lending issues; and establish, implement, operate, and maintain a fair lending training program for employees.

The limited litigation involving potential fair lending violations reflects the limited number of investigations these agencies have initiated since 2005. From 2005 through 2009, HUD and FTC, as discussed previously, initiated 22 investigations of independent lenders at potentially heightened risk for fair lending law violations. Resource constraints may affect their capacity to file and settle fair lending-related complaints. For example, FTC officials said that most of their staff who work on fair lending issues were dedicated to pursuing the litigation associated with the three investigations that the agency opened from 2005 through 2008. As two of these three investigations have now been settled or concluded, additional staff resources are available to pursue evidence of potential violations at other lenders under the agency's jurisdiction. Since 2005, DOJ has filed and settled complaints in seven cases involving potential violations of the fair lending laws (see table 6).
These cases involved allegations of racial and national origin discrimination, sexual harassment of female borrowers, and discrimination based on marital status in the areas of loan pricing and underwriting, as well as redlining. One of these settlements—United States v. First Lowndes Bank, Inc.—involved an allegation that a lender had engaged in mortgage pricing discrimination, which has been the basis of several depository institution regulators' referrals in recent years. According to DOJ officials, the enforcement actions for mortgage lending result both from investigations that were initiated under the department's independent authority and from referrals from depository institution regulators. As shown in table 6, five of the seven fair lending cases settled were initiated under DOJ's independent investigative authority; one was based on a referral from FDIC, and one from the Federal Reserve. However, DOJ officials said that there are ongoing investigations based on other referrals from depository institution regulators, including one case in pre-suit negotiations based on a referral from the Federal Reserve and another case that arose from an FDIC referral. According to officials from federal enforcement agencies, investigations involving allegations of fair lending violations can be complex and time-consuming. For example, DOJ officials said that if the department decided to pursue an investigation based on a referral from a depository institution regulator, such an investigation may be broader than the information contained in a typical referral. DOJ officials said that referrals typically were based on a single examination, which may cover a limited period (such as potential discrimination based on an analysis of HMDA data for a particular year). They also pointed out that the standard for referral to DOJ for the depository institution regulators is "reason to believe" that a discriminatory practice is occurring. DOJ officials said that to determine if a referred pattern or practice of discrimination warrants federal court litigation, they may request and analyze HMDA and underwriting data for additional years. Furthermore, they said that lenders often hire law firms that specialize in fair lending to assist the lender in its response to the department's investigation. DOJ officials said that these firms may conduct their own analysis of the HMDA and underwriting and pricing data and, as part of the investigation process, offer their views on how any apparent disparities may be explained. Depending on the circumstances, this process can be lengthy. According to a 2008 report by FDIC's Inspector General, fair lending referrals that are not sent back to the referring agency for further review may be at DOJ for years before they are resolved. Additionally, HUD officials said that their initial investigations into evidence of potential fair lending violations may detect additional evidence of discrimination that also must be collected and reviewed. According to officials from an enforcement agency and available research, another challenge that complicates fair lending investigations involves lending discrimination based on disparate impact, which we also raised as an enforcement challenge in our 1996 report. As discussed in the Interagency Policy Statement on Discrimination in Lending, issued in 1994, fair lending violations may include allegations of disparate treatment or disparate impact. 
It is illegal for a lender to treat borrowers from protected classes differently, such as by intentionally charging higher interest rates based on race, sex, or national origin rather than on creditworthiness or other legitimate considerations. It also is illegal for a lender to maintain a facially neutral policy or practice that has a disproportionately adverse effect on members of a protected group, unless the policy is justified by a business necessity that cannot be met by a less discriminatory alternative. For example, a lender might have a blanket prohibition on originating loans below a certain dollar threshold because smaller loans might be more appealing to borrowers with limited financial resources and therefore represent higher default risks. While such a policy might help protect a lender against credit losses, it also could affect minority borrowers disproportionately. Furthermore, alternatives other than a blanket prohibition, such as reviewing applicant credit data, might mitigate potential losses. It may be difficult for enforcement agencies or depository institution regulators to evaluate lender claims that they have a business necessity for particular policies and to identify viable alternatives that would not have a disparate impact on protected groups. However, an official from the Federal Reserve told us that the potential for disparate impact can be assessed through its examination and other oversight processes. The official said the Federal Reserve has evaluated lenders' policies to assess the potential for disparate impact and has referred at least one lender to DOJ based on the disparate impact theory. DOJ and FTC officials also said that recruiting and retaining staff with specialized expertise in the fair lending laws can be challenging. Both DOJ and FTC officials said that recruiting attorneys with expertise in fair lending investigations and litigation was difficult, and that employees who develop such expertise may leave for other positions, including positions at federal depository institution regulators or quasi-governmental agencies that offer higher compensation. Additionally, DOJ and FTC officials said that recruiting and retaining economists who have expertise in analyzing HMDA data and underwriting data to detect potential disparities in mortgage lending can be difficult. FTC officials said that due to the recent departure of economists to depository institution regulators, the agency increasingly relies on outside vendors to provide such economic and statistical expertise. Finally, officials from some federal enforcement agencies and depository institution regulators cited ECOA's statute of limitations as potentially challenging for enforcement activities. Currently, ECOA's statute of limitations for referrals to DOJ from the depository institution regulators and for actions brought on DOJ's own authority requires that no legal actions in federal court be initiated more than 2 years after the alleged violation occurred. According to federal officials, the ECOA statute of limitations may limit their activities because HMDA data generally are not available for a year or more after a potential lending violation has occurred. Consequently, federal agencies and regulators may have less than a year to schedule an investigation or examination, collect and review additional HMDA and underwriting and pricing data, and pursue other approaches to determine if a referral to DOJ would be warranted. 
According to OTS officials, an extension of the statute of limitations beyond its current 2-year period would provide valuable additional time to conduct the detailed analyses that are necessary in fair lending cases. Accordingly, FDIC has recommended that Congress extend ECOA's statute of limitations to 5 years. DOJ officials noted that they would not be averse to the statute of limitations being extended. While federal officials said that there are options to manage the challenges associated with the ECOA statute of limitations, these options have limitations. For example, some enforcement officials said that ECOA violations may also be investigated under FHA, which has longer statutes of limitations. Specifically, under FHA, DOJ may bring an FHA action based on a pattern or practice or for general public importance within 5 years for civil penalties and within 3 years for damages; there is no limitation period for injunctive relief. However, not all ECOA violations necessarily constitute FHA violations as well. Enforcement agency officials also said that in some cases they may be able to obtain tolling agreements as a means to manage the ECOA and FHA statutes of limitations. Tolling agreements are written agreements between enforcement agencies, or private litigants, and potential respondents, such as lenders subject to investigations or examinations for potential fair lending violations, in which the respondent agrees to extend the relevant statute of limitations so that investigations and examinations may continue. Enforcement agency officials said that lenders often agree to tolling agreements and work with the agencies to explain potential fair lending law violations, such as disparities in mortgage pricing. The officials said that the lenders have an incentive to agree to tolling agreements because the enforcement agencies otherwise may file wide-ranging complaints against them on the basis of available information shortly before the relevant statute of limitations expires. However, enforcement officials said it is not always possible to obtain lenders' consent to enter into tolling agreements, and our review of fair lending examination files confirmed this assessment. We found several instances in which depository institution regulators had difficulty obtaining tolling agreements. Because federal enforcement efforts to manage ECOA's 2-year statute of limitations may not always be successful, the agencies' capacity to thoroughly investigate potential fair lending violations and take appropriate corrective action in certain cases may be compromised. Federal enforcement agencies and depository institution regulators face challenges in consistently, efficiently, and effectively overseeing and enforcing the fair lending laws, due in part to data limitations and the fragmented U.S. financial regulatory structure. HMDA data, while useful in screening for potentially heightened risks of fair lending violations in mortgage lending, are limited because they currently lack the underwriting data needed to perform a robust analysis. While requiring lenders to collect and report such data would impose additional costs on them, particularly for smaller institutions, the lack of this information compromises the depository institution regulators' ability to effectively and efficiently oversee and enforce the fair lending laws. 
Such data also could facilitate independent research into the potential risk for discrimination in mortgage lending as well as better inform Congress and the public about this critical issue. A variety of options could mitigate the costs associated with additional HMDA reporting, including limiting the reporting requirement to larger lenders or restricting the data's use to regulatory purposes. While these alternatives would limit or restrict the additional publicly available information on the potential risk for mortgage discrimination compared with a general data collection and reporting requirement, these tradeoffs merit consideration because additional data would facilitate the consistent, efficient, and effective oversight and enforcement of the fair lending laws. The limited data available about potentially heightened risks for discrimination during the preapplication process also affect federal oversight of the fair lending laws for mortgage lending. Currently, enforcement agencies and depository institution regulators lack a direct and reliable source of data to help determine whether lending officials may have engaged in discriminatory practices in their initial interactions with mortgage loan applicants. While researchers and consumer groups have conducted studies using testers that suggest that discrimination does take place during the preapplication process, and federal officials generally agree that testers offer certain benefits, federal officials also have raised several concerns about their use. For example, they have questioned the costs of using testers and the reliability of the data obtained from them. Nevertheless, the lack of a reliable means to assess the potential risk for discrimination during the preapplication phase compromises depository institution regulators' capacity to ensure lender compliance with the fair lending laws in all phases of the mortgage lending process. In this regard, FDIC's possible incorporation of testers into its examination process, depository institution regulators' ongoing efforts to update the interagency fair lending examination guidance, or the Interagency Task Force on Fair Lending may offer opportunities to identify improved means of assessing discrimination in the preapplication phase. Moreover, the potential use of consumer surveys, as suggested by the Department of the Treasury in its recent report on regulatory restructuring, may represent another approach to assessing the potential risk for discrimination during the preapplication phase. Data limitations may have even more significant impacts on depository institution regulators' and enforcement agencies' capacity to assess fair lending risk in nonmortgage lending (such as small business, credit card, and automobile lending). Because Federal Reserve Regulation B generally prohibits lenders from collecting personal characteristic data for nonmortgage loans, agencies generally cannot target lenders for investigations or examinations as they can for mortgage loans. Consequently, federal agencies have limited tools to investigate potentially heightened risks of violations in types of lending that affect most U.S. consumers. While depository institution regulators and enforcement agencies have tried to develop ways to provide oversight in this area, the existing data limitations have affected the focus of oversight and enforcement efforts. 
While requiring lenders to collect and report personal characteristic data for nonmortgage loans, as well as associated underwriting data as may be appropriate, raises important cost and complexity concerns, the absence of such data represents a critical limitation in federal fair lending oversight efforts. There also are a number of larger challenges to fair lending oversight and enforcement stemming from the fragmented U.S. regulatory structure and other factors such as mission focus and resource constraints. Specifically, independent lenders, which were the predominant originators of subprime and other questionable mortgages that often were made to minority borrowers in recent years, generally are subject to less comprehensive oversight than federally insured depository institutions and represent significant fair lending risks. In particular, enforcement agencies do not conduct investigations of many independent lenders that are identified as outliers through the Federal Reserve's annual analysis of HMDA data to determine if these disparities represent fair lending law violations. The potential exists that additional instances of discrimination against borrowers could be taking place at such firms without being detected. Such limited oversight could undermine enforcement agencies' efforts to deter violations. While depository institution regulators' outlier examinations differ in important respects from enforcement agency investigations, depository institution regulators generally conduct examinations of all lenders identified as outliers to assess the potential risk for discrimination, which likely contributes to efforts at deterrence. Moreover, enforcement agencies, unlike most depository institution regulators, generally do not initiate routine fair lending investigations of independent lenders that are not identified as outliers, which represents an important gap in fair lending oversight. The Federal Reserve lacks clear authority to assess fair lending compliance by nonbank subsidiaries of bank holding companies, which also have originated large numbers of subprime mortgages, in the same way that it oversees the activities of state-chartered depository institutions under its jurisdiction. The lack of clear authority to conduct routine consumer compliance examinations of nonbank subsidiaries is important because the Federal Reserve identifies many potential fair lending violations at state-chartered banks through such routine examinations. Without similar authority for nonbank subsidiaries, the Federal Reserve's capacity to identify potential risks for fair lending violations is limited. Despite the joint interagency fair lending examination guidance and various coordination efforts, we also found that having multiple depository institution regulators resulted in variations in screening techniques, the management of the outlier examination process, examination documentation standards, and the number of referrals and types of examination findings. While differences in these areas may not be unexpected given the varied types of lenders under each depository institution regulator's jurisdiction, these differences raise questions about the consistency and effectiveness of regulatory oversight. For example, the evidence suggests that lenders regulated by FDIC, the Federal Reserve, and OTS are more likely than lenders regulated by OCC and NCUA to be the subject of referrals to DOJ for being at potentially heightened risk of fair lending violations. 
Our current work did not fully evaluate the reasons for, and effects of, the identified differences, and additional work in this area could help provide additional clarity. Finally, federal depository institution regulators and enforcement agencies also face some challenges associated with the 2-year statute of limitations under ECOA applicable to federal district court actions brought by DOJ. Because it takes about 6 months for the Federal Reserve to reconcile and review HMDA data, depository institution regulators and enforcement agencies typically review the HMDA data almost one year after the underlying loan decisions occurred, and may have a limited opportunity to conduct thorough examinations and investigations in some cases. While strategies may be available to manage the ECOA 2-year statute of limitations, such as obtaining tolling agreements, they are not always effective. Therefore, ECOA's statute of limitations may work against the act's general objective, which is to penalize and deter lending discrimination. To facilitate the capacity of federal enforcement agencies and depository institution regulators, as well as independent researchers, to identify lenders that may be engaged in discriminatory practices in violation of the fair lending laws, Congress should consider the merits of additional data collection and reporting options. These options pertain to obtaining key underwriting data for mortgage loans, such as credit scores as well as LTV and DTI ratios, and personal characteristic data (such as race, ethnicity, and sex) and relevant underwriting data for nonmortgage loans. To help ensure that all potential risks for fair lending violations are thoroughly investigated and that sufficient time is available to do so, Congress should consider extending the statute of limitations on ECOA violations. As Congress debates the reform of the financial regulatory system, it also should take steps to help ensure that consumers are adequately protected, that laws such as the fair lending laws are comprehensive and consistently applied, and that oversight is efficient and effective. Any new structure should address gaps and inconsistencies in the oversight of independent mortgage brokers and nonbank subsidiaries, as well as address the potentially inconsistent oversight provided by depository institution regulators. To help strengthen fair lending oversight and enforcement, we recommend that DOJ, FDIC, the Federal Reserve, FTC, HUD, NCUA, OCC, and OTS work collaboratively to identify approaches to better assess the potential risk for discrimination during the preapplication phase of mortgage lending. For example, the agencies and depository institution regulators could further consider the use of testers, perhaps on a pilot basis, as well as surveys of mortgage loan borrowers and applicants or alternative means to better assess the potential risk for discrimination during this critical phase of the mortgage lending process. We provided a draft of this report to the heads of HUD, FTC, DOJ, FDIC, the Federal Reserve, NCUA, OCC, and OTS. We received written comments from FTC, FDIC, NCUA, the Federal Reserve, OCC, and OTS, which are summarized below and reprinted in appendixes III through VIII. HUD provided its comments in an e-mail, which is summarized below. DOJ did not provide written comments. All of the agencies and regulators, including DOJ, also provided technical comments, which we incorporated into the report where appropriate. 
We also provided excerpts of the draft report to two researchers whose studies we cited to help ensure the accuracy of our analysis. One of the researchers responded and said that the draft report accurately described his research, while the other did not respond. In their written comments, FDIC, the Federal Reserve, NCUA, OCC, and OTS agreed with our recommendation to work collaboratively regarding the potential use of testers or other means to better assess the risk of discriminatory practices during the premortgage loan application process, and they generally described their fair lending oversight programs and, in some cases, planned enhancements to these programs. In particular, the Federal Reserve stated that it would be pleased to provide technical assistance to Congress regarding potential enhancements to HMDA data to better identify lenders at heightened risk of potential fair lending violations and described its existing approaches to fair lending oversight, including for the nonbank subsidiaries of bank holding companies. Further, the Federal Reserve stated that it is developing a framework for increased risk-based supervision for these entities. While such enhancements could strengthen the Federal Reserve's oversight of nonbank subsidiaries, the lack of clear authority for it to conduct routine examinations continues to be an important limitation in fair lending oversight and enforcement. OCC also described its fair lending oversight program and planned revisions. First, OCC stated that it planned to enhance its procedures by formalizing headquarters involvement in the oversight process. For example, senior OCC headquarters officials will receive reports on at least a quarterly basis on scheduled, pending, and completed fair lending examinations to facilitate oversight of the examination process. Second, OCC plans to strengthen its fair lending examination documentation through, for example, changes in its centralized data systems so that the systems contain, in standardized form: relevant examination dates, the risk factors that were identified through the screening and other processes for each lender, the focal points of the examination, the reasons for any differences between the focal points and the areas identified through the risk screening processes, and the key findings of the examinations. OCC also noted that it (1) plans to expand its "HMDA-plus" pilot program to collect underwriting data from large banks at an earlier stage to facilitate screening efforts, (2) views working with other regulators to enhance the effectiveness and consistency of screening efforts as appropriate, and (3) will undertake work with other regulators and DOJ to address variations in referral practices. NCUA's Chairman generally concurred with the draft report's analysis and recommendations and offered additional information. First, the Chairman stated that additional study is needed to assess the depository institution regulators' varying referral practices, but that such study should be conducted before drawing any conclusions about the effectiveness of NCUA's fair lending oversight. The Chairman stated that NCUA has not made any referrals to DOJ because the agency did not identify any potential violations during the period covered by the report. Further, the Chairman stated that NCUA uses the same examination procedures as the other depository institution regulators and offered reasons as to why violations may not exist at credit unions. 
For example, the Chairman said that credit unions have a specified mission of meeting the credit and savings needs of their members, especially persons of modest means (who typically are the targets of discriminatory actions). We have not evaluated the Chairman's analysis as to why fair lending violations may not exist at credit unions, but note that there is a potential for discrimination in any credit decision and that all federal agencies and regulators have a responsibility to identify and punish such violations as well as deter similar activity. The Chairman also (1) concurred that additional data collection under HMDA could enhance efforts to detect lenders at heightened risk of violations, but believes that such requirements should pertain to all lenders rather than a subset; (2) agreed that ECOA's statute of limitations should be extended; and (3) concurred with the recommendation that NCUA work collaboratively with other regulators and agencies to better assess the potential for discrimination during the preapplication phase of mortgage lending. In an e-mail, HUD said that improved communication and cooperation among the federal agencies responsible for overseeing the federal fair lending laws could improve federal compliance and enforcement efforts. HUD also concurred with the draft report's analysis that expanding the range of data reported by mortgage lenders pursuant to HMDA would significantly expand the department's ability to identify new cases of potential lending discrimination. In particular, HUD stated that requiring lenders to report underwriting data, such as borrowers' credit scores, would allow the department to more accurately assess lenders' compliance with the Fair Housing Act. However, HUD urged that careful consideration be given to any proposal to limit the range of lenders subject to new reporting requirements under HMDA. HUD stated that, in its experience, smaller lenders, no less than larger lenders, may exhibit disparities in lending that warrant investigation for compliance with federal law. In addition, HUD stated that many smaller lenders may already collect and maintain, for other business purposes, the same data that may be sought through expanded HMDA reporting requirements. FTC's Director, Bureau of Consumer Protection, stated that the draft report appropriately drew attention to limitations in HMDA data as a means to identify lenders at heightened risk of fair lending violations. The Director also highlighted two conclusions in the draft report and noted that the limitations of the data warranted collecting additional data before any conclusions about discrimination could be drawn. First, the Director stated that the report concluded that independent lenders have a heightened risk of potential violations compared to depository institutions. The Director said that many lenders make very few or no high-priced loans and thus cannot be evaluated by an analysis of HMDA pricing data, whereas independent lenders disproportionately make such loans. Therefore, the Director said it is not possible to draw conclusions as to which types of lenders are more likely to have committed violations solely on the basis of HMDA data or the outlier lists and that such a conclusion about independent lenders is especially tenuous. The Director also stated that the report recommends that additional underwriting data be collected to supplement current mortgage data but does not address the importance of discretionary pricing data. 
The Director stated that lender discretion in granting or pricing credit represents a significant fair lending risk and that the agency collects such information, in addition to underwriting information, as part of its investigations. In sum, the Director stated that while HMDA data are useful, additional data must be collected from lenders before any conclusions about discrimination can meaningfully be drawn. We have revised the draft to more fully reflect the Director's views regarding limitations in HMDA data and its capacity to identify lenders at heightened risk of fair lending violations and draw conclusions about potential discrimination in mortgage lending. However, while HMDA data may have limitations with respect to identifying mortgage pricing disparities, as the Director noted, we do not concur that statements in the draft report suggesting that independent lenders may represent relatively heightened risks of fair lending violations are especially tenuous. As stated in the draft report, subprime loans and similar high-cost mortgages, which are largely originated by independent lenders and nonbank subsidiaries of holding companies, appear to have been made to borrowers with limited or poor credit histories and subsequently resulted in significant foreclosure rates for such borrowers. Further, our 2007 report noted that subprime lending grew rapidly in areas with higher concentrations of minorities. While the scope of our work did not involve an analysis of the feasibility and costs of incorporating discretionary pricing data into HMDA data collection and reporting requirements, we acknowledge that the lack of such information may challenge oversight and enforcement efforts. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees, and to the Chairman, Board of Governors of the Federal Reserve System; the Chairman, Federal Deposit Insurance Corporation; the Comptroller of the Currency, Office of the Comptroller of the Currency; the Acting Director, Office of Thrift Supervision; the Inspector General, National Credit Union Administration; the Chairman of the Federal Trade Commission; the Secretary of the Department of Housing and Urban Development; the Attorney General; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. 
The objectives of our report were to (1) assess the strengths and limitations of data sources that enforcement agencies and depository institution regulators use to screen for lenders that have potentially heightened risk for fair lending law violations and discuss options for enhancing the data; (2) assess federal oversight of lenders that may represent relatively high risks of fair lending violations as evidenced by analysis of Home Mortgage Disclosure Act (HMDA) data and other information; (3) examine differences in depository institution regulators' fair lending oversight programs; and (4) discuss enforcement agencies' recent litigation involving potential fair lending law violations and challenges that federal officials have identified in fulfilling their enforcement responsibilities. To address the first objective—assessing the strengths and limitations of data used to screen for lenders that appear to be at heightened risk of violating the fair lending laws—we reviewed and analyzed fair lending examination and investigation guidance, policies, and procedures, and other agency documents. We gathered information on how enforcement agencies—the Department of Housing and Urban Development (HUD), the Federal Trade Commission (FTC), and the Department of Justice (DOJ)—and depository institution regulators—the Board of Governors of the Federal Reserve System (Federal Reserve), the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Office of Thrift Supervision (OTS)—use data sources such as HMDA data to screen for high-risk lenders. HMDA requires many mortgage lenders to collect and report data on mortgage applicants and borrowers. In 2004, HMDA was amended to require lenders to report certain mortgage loan pricing data. To assess the strengths and limitations of these data, we reviewed academic research, studies from consumer advocacy groups, Inspectors General reports, congressional testimonies, and prior GAO work on the strengths and limitations of HMDA data and the limited availability of data for nonmortgage lending. We also reviewed available information on current initiatives to gather enhanced HMDA data (adding underwriting information such as loan-to-value ratios and credit scores) earlier in the screening and examination process, such as OCC's pilot project. In addition, we interviewed officials from the enforcement agencies and depository institution regulators listed above—including senior officials, examiners, policy analysts, economists, statisticians, attorneys, and compliance specialists—to discuss how they use various data sources to screen for high-risk lenders, gather their perspectives on the strengths and limitations of available data sources, and obtain information on the costs of reporting HMDA data. We did not interview NCUA economists or attorneys, and NCUA does not have statisticians; we did, however, interview NCUA senior officials, examiners, policy analysts, and compliance specialists. We also discussed current initiatives to address screening during the preapplication phase of lending, and the potential benefits and limitations of using testers during this phase. We evaluated the depository institution regulators' examination guidance and approaches for the preapplication phase. 
We interviewed researchers, lenders, representatives from community and fair housing groups, and independent software vendors to gather perspectives on the strengths and limitations of HMDA data in the fair lending screening process and the benefits and costs of requiring the collection of additional or enhanced HMDA data. To address the second objective, we reviewed and analyzed enforcement agency and depository institution regulator documents. More specifically, we reviewed and analyzed internal fair lending examination and investigation guidance, policies, and procedures; federal statutes and information provided by the agencies on their authority, mission and jurisdiction; the Federal Reserve’s annual HMDA outlier lists; information on staffing resources; documentation on the number of fair lending enforcement actions initiated and settled; and other agency documents to compare and contrast the agencies’ and depository institution regulators’ authority and efforts to oversee the fair lending laws, including enforcement and investigative practices. We also obtained information on depository institution regulators’ outlier examination programs from internal agency documents and our file review of examinations of outlier institutions, as discussed below. Furthermore, we interviewed key agency officials from the eight enforcement agencies and depository institution regulators that oversee the fair lending laws to gather information on their regulatory and enforcement activities and compare their approaches. To gather information on state coordination of fair lending oversight with federal agencies, as well as to compare and contrast fair lending examination policies and practices, we also interviewed state banking regulatory officials and community groups. We also evaluated certain aspects of depository institution regulators’ compliance with fair lending outlier examination schedules and procedures. Specifically, we conducted a systematic review of 152 fair lending examination summary files derived from each depository institution regulator’s annual list of institutions identified to be at higher risk for fair lending violations (that is, their outlier lists). We examined outlier lists based on 2005 and 2006 HMDA data because they fully incorporated pricing data (first introduced in 2004 HMDA data), and because the examinations based on these lists had a higher likelihood of being completed. We systematically collected information and evaluated each examination’s compliance with key agency regulations and interagency and internal fair lending guidance. For instance, we reviewed the files to determine if outlier examinations had been initiated in a timely fashion; if examination scoping, focal points, and findings had been documented; and if recommendations were made to correct any deficiencies. We limited our focus to assessing regulatory compliance with applicable laws, regulations, and internal guidance and did not make judgments on how well agencies conducted the examinations. For three of the depository institution regulators—the Federal Reserve, FDIC, and OTS—we reviewed summary documentation (such as reports of examination, scope and methodology memorandums, exit and closing memorandums, and referral documentation to DOJ) of completed examinations for every institution on their 2005 and 2006 HMDA data outlier lists when relevant. This amounted to 32 examinations for the Federal Reserve, 38 for FDIC, and 40 for OTS. 
Because NCUA (1) does not have a centralized process for identifying outliers, (2) was unable to respond to our document request in a timely manner, and (3) had a relatively low number of credit unions identified as outliers by the Federal Reserve, we randomly selected and reviewed summary documentation for a sample of 10 examinations conducted in 2007 (out of 25 examinations) to capture examinations that analyzed loans made in 2005 and 2006. We also reviewed a random sample of national banks due to limitations in OCC's fair lending examination documentation and the need to conduct our analysis in a timely manner. We selected a simple random sample of 27 examinations of institutions from a population of 231 institution examinations derived from OCC's annual outlier lists for 2005 and 2006 HMDA data. Because OCC also randomly selects a sample of banks (both HMDA and non-HMDA filing) to receive comprehensive fair lending examinations, we also reviewed examination files from 2005 for five of these institutions (out of a population of 31). Thus, our sample totaled 32 lender examinations, and we requested that OCC provide all fair lending oversight materials for each of these lenders from 2005 through 2008 so that we could discern the extent to which OCC was complying with regulations and guidance for its outlier examination program. We collected the same information for these examinations as from the other depository institution regulators. In addition, we reviewed guidance, policies, procedures, relevant statutes, and other documents from the Federal Reserve to assess the extent of fair lending oversight conducted for nonbank subsidiaries of bank holding companies. We also reviewed past GAO reports on the history of oversight of nonbank subsidiaries of bank and thrift holding companies. We interviewed agency officials and consumer advocacy groups to gather their perspectives on the extent of current oversight for nonbank subsidiaries of bank holding companies. We also spoke with agency officials to gather information on a current interagency pilot program among the Federal Reserve, OTS, FTC, and the Conference of State Bank Supervisors to monitor the activities of nonbank subsidiaries of holding companies. For the third objective, in addition to reviewing our analysis of depository institution regulators' compliance with fair lending examination policies as described above, we (1) conducted further comparisons of their outlier examination screening processes; (2) reviewed documentation and reports related to their management of the outlier examination process and the documentation and reporting of examination findings; and (3) reviewed documentation related to their referral practices and outlier examination findings. We also reviewed relevant federal internal control standards for documentation and reporting and compared them with the depository institution regulators' practices as appropriate. We also discussed these issues with senior officials from the depository institution regulators and state financial regulatory officials from New York, Washington, and Massachusetts. In addition, we discussed their efforts to coordinate fair lending oversight programs through the development of interagency examination guidance and participation in meetings of the Interagency Task Force on Fair Lending and related forums. 
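To illustrate the type of simple random selection of examination files described above, the following minimal sketch shows one way such samples could be drawn in Python; it is not GAO's actual procedure, and the file labels and fixed seed are illustrative assumptions included only to make the example reproducible.

    import random

    # Illustrative only: draw simple random samples of the sizes noted above.
    # Population and sample sizes come from the text; the seed and labels are
    # hypothetical, used so the example produces the same selection each run.
    rng = random.Random(2009)

    def draw_sample(population_size, sample_size, label):
        # Number the examination files 1..N and select without replacement.
        files = list(range(1, population_size + 1))
        selected = sorted(rng.sample(files, sample_size))
        print(f"{label}: {sample_size} of {population_size} -> {selected}")
        return selected

    draw_sample(25, 10, "NCUA examinations (2007)")
    draw_sample(231, 27, "OCC outlier examinations (2005-2006)")
    draw_sample(31, 5, "OCC comprehensive examinations (2005)")

Selecting without replacement in this way gives every examination file in the population an equal chance of inclusion, which is the property that allows findings from the sample to be characterized as representative of the population reviewed.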
To address the fourth objective, we reviewed agencies' internal policies, procedures, and guidance as well as federal statutory requirements that depository institution regulators use when making referrals or notifications to HUD or DOJ for potential violations of the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act. We also analyzed information on enforcement agencies' and depository institution regulators' staff resources and any time constraints they might face related to ECOA's 2-year statute of limitations for making referrals to DOJ for follow-up investigations and potential enforcement actions. To obtain information on the enforcement activities of federal agencies, we conducted an analysis of the number of fair lending investigations initiated, complaints filed, and settlements reached by each enforcement agency. We also interviewed officials from each depository institution regulator and enforcement agency to gather information on investigative practices that enforcement agencies use when deciding whether to pursue a fair lending investigation or complaint against an institution and possible challenges that enforcement agencies and depository institution regulators face in enforcing the fair lending laws, specifically ECOA's 2-year statute of limitations. For all of the objectives, we interviewed representatives from financial institutions, consumer advocacy groups, and trade associations, such as the Center for Responsible Lending, the National Community Reinvestment Coalition, the National Fair Housing Alliance, and the Leadership Conference on Civil Rights, as well as a large commercial bank and the Consumer Bankers Association. We obtained their perspectives on regulatory efforts to enforce fair lending laws, which include screening lenders for potential violations, conducting examinations, and enforcing the laws through referrals, investigations, or other means, and on any collaborative activities between depository institution regulators and state entities. We conducted this performance audit from September 2008 to July 2009 in Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Wesley M. Phillips, Assistant Director; Benjamin Bolitzer; Angela Burriesci; Kimberly Cutright; Chris Forys; Simin Ho; Marc Molino; Carl Ramirez; Linda Rego; Barbara Roesmann; Jim Vitarello; and Denise Ziobro made major contributions to this report. Technical assistance was provided by Joyce Evans and Cynthia Taylor.
The Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA)—the "fair lending laws"—prohibit discrimination in lending. Responsibility for their oversight is shared among three enforcement agencies—the Department of Housing and Urban Development (HUD), Federal Trade Commission (FTC), and Department of Justice (DOJ)—and five depository institution regulators—the Federal Deposit Insurance Corporation (FDIC), Board of Governors of the Federal Reserve System (Federal Reserve), National Credit Union Administration (NCUA), Office of the Comptroller of the Currency (OCC), and Office of Thrift Supervision (OTS). This report examines (1) data used by agencies and the public to detect potential violations and options to enhance the data, (2) federal oversight of lenders that are identified as being at heightened risk of violating the fair lending laws, and (3) recent cases involving the fair lending laws and associated enforcement challenges. GAO analyzed fair lending laws and relevant research, and interviewed agency officials, lenders, and consumer groups. GAO also reviewed 152 depository institution fair lending examination files. Depending upon file availability by regulator, GAO reviewed all relevant files or a random sample as appropriate. The Home Mortgage Disclosure Act (HMDA) requires certain lenders to collect and publicly report data on the race, national origin, and sex of mortgage loan borrowers. Enforcement agencies and depository institution regulators use HMDA data to identify outliers—lenders that may have violated fair lending laws—and focus their investigations and examinations accordingly. But HMDA data also have limitations; they do not include information on the credit risks of mortgage borrowers, which may limit regulators' and the public's capacity to identify the lenders most likely to be engaged in discriminatory practices without first conducting labor-intensive reviews. Another data limitation is that lenders are not required to report data on the race, ethnicity, and sex of nonmortgage loan borrowers—such as small businesses—which limits oversight of such lending. While requiring lenders to report additional data would impose costs on them, particularly smaller institutions, options exist to mitigate such costs to some degree, such as limiting the reporting requirements to larger institutions. Without additional data, agencies' and regulators' capacity to identify potential lending discrimination is limited. GAO identified the following limitations in the consistency and effectiveness of fair lending oversight that are largely attributable to the fragmented U.S. financial regulatory system: (1) Federal oversight of lenders that may represent heightened risks of fair lending law violations is limited. For example, the enforcement agencies are responsible for monitoring independent mortgage lenders' compliance with the fair lending laws. Such lenders have been large originators of subprime mortgage loans in recent years and have more frequently been identified through analysis of HMDA data as outliers than depository institutions, such as banks. Depository institution regulators are more likely to assess the activities of outliers and, unlike enforcement agencies, they routinely assess the compliance of lenders that are not outliers. As a result, many fair lending violations at independent lenders may go undetected, and efforts to deter potential violations may be ineffective. 
(2) Although depository institution regulators' fair lending oversight efforts may be more comprehensive, the division of responsibility among multiple agencies raises questions about the consistency and effectiveness of their efforts. For example, each regulator uses a different approach to analyze HMDA data to identify outliers, and examination documentation varies. Moreover, since 2005, OTS, the Federal Reserve, and FDIC have referred more than 100 lenders to DOJ for further investigation of potential fair lending violations, as required by ECOA, while OCC made one referral and NCUA none. Enforcement agencies have settled relatively few (eight) fair lending cases since 2005. Agencies identified several enforcement challenges, including the complexity of fair lending cases, difficulties in recruiting and retaining staff, and the constraints of ECOA's 2-year statute of limitations.
In modern warfare, military forces are heavily dependent upon access to the electromagnetic spectrum for successful operations. Communications with friendly forces and detection, identification, and targeting of enemy forces, among other tasks, are all reliant upon the ability to operate unhindered in the spectrum. For this reason, control of the electromagnetic spectrum is considered essential to carrying out military operations. Figure 1 illustrates the electromagnetic spectrum and some examples of military uses at various frequencies. For example, infrared or thermal imaging technology senses heat emitted by a person or an object and creates an image. Sensor systems utilize this technology to provide the advantage of seeing not only at night but also through smoke, fog, and other obscured battlefield conditions. DOD defines electronic warfare as any military action involving the use of electromagnetic and directed energy to control the electromagnetic spectrum or to attack the enemy. The purpose of electronic warfare is to secure and maintain freedom of action in the electromagnetic spectrum for friendly forces and to deny the same to the adversary. Traditionally, electronic warfare has been composed of three primary activities:

Electronic attack: the use of electromagnetic, directed energy, or antiradiation weapons to attack personnel, facilities, or equipment with the intent of degrading, neutralizing, or destroying enemy combat capability. Electronic attack can be used offensively, such as jamming enemy communications or jamming enemy radar to suppress its air defenses, and defensively, such as deploying flares.

Electronic protection: actions to protect personnel, facilities, and equipment from any effects of friendly, neutral, or enemy use of the electromagnetic spectrum, as well as naturally occurring phenomena that degrade, neutralize, or destroy friendly combat capability.

Electronic warfare support: actions directed by an operational commander to search for, intercept, identify, and locate sources of radiated electromagnetic energy for the purposes of immediate threat recognition, targeting, and planning and conduct of future operations.

Electronic warfare is employed to create decisive stand-alone effects or to support military operations, such as information operations and cyberspace operations. According to DOD, information operations are the integrated employment, during military operations, of information-related capabilities in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision-making of adversaries and potential adversaries while protecting our own. Information-related capabilities can include, among others, electronic warfare, computer network operations, military deception, operations security, and military information support operations (formerly psychological operations). Electronic warfare contributes to the success of information operations by using offensive and defensive tactics and techniques in a variety of combinations to shape, disrupt, and exploit adversarial use of the electromagnetic spectrum while protecting U.S. and allied freedom of action. Since cyberspace requires both wired and wireless links to transport information, both offensive and defensive cyberspace operations may require use of the electromagnetic spectrum. 
According to DOD, cyberspace operations are the employment of cyberspace capabilities where the primary purpose is to achieve military objectives or effects through cyberspace, which include computer network operations, among others. Computer network operations include computer network attack, computer network defense, and related computer network exploitation-enabling operations. Electronic warfare and cyberspace operations are complementary and have potentially synergistic effects. For example, an electronic warfare platform may be used to enable or deter access to a computer network. U.S. Strategic Command (Strategic Command) has been designated since 2008 as the advocate for joint electronic warfare. Strategic Command officials stated that, in the past, the primary office for electronic warfare expertise—the Joint Electronic Warfare Center—had several different names and was aligned under several different organizations, such as the Joint Forces Command and the U.S. Space Command. According to Strategic Command officials, in addition to the Joint Electronic Warfare Center, the command employs electronic warfare experts in its non-kinetic operations staff and in the Joint Electromagnetic Preparedness for Advanced Combat organization. According to Strategic Command officials, the Joint Electronic Warfare Center is the largest of the three organizations and employs approximately 60 military and civilian electronic warfare personnel and between 15 and 20 contractors. Strategic Command officials stated that the Joint Electronic Warfare Center was created as a DOD center of excellence for electronic warfare and has electronic warfare subject matter experts. The center provides planning and technical support not only to Strategic Command but to other combatant commands and organizations, such as U.S. Central Command, U.S. European Command, U.S. Pacific Command, and the Department of Homeland Security. The Joint Electronic Warfare Center also provides assistance with requirements generation to the military services. DOD developed an electronic warfare strategy, but only partially addressed key strategy characteristics identified as desirable in prior work by GAO. The National Defense Authorization Act for Fiscal Year 2010 requires the Secretary of Defense to submit to the congressional defense committees an annual report on DOD's electronic warfare strategy for each of fiscal years 2011 through 2015. Each annual report is to be submitted at the same time the President submits the budget to Congress and is to contain, among other things, a description and overview of DOD's electronic warfare strategy and the organizational structure assigned to oversee the development of the department's electronic warfare strategy, requirements, capabilities, programs, and projects. In response to this legislative requirement, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics issued DOD's 2011 and 2012 fiscal year strategy reports to Congress in October 2010 and November 2011, respectively. We previously reported that it is desirable for strategies to delineate six key characteristics, including organizational roles and responsibilities for implementing parties as well as performance measures to gauge results. 
The six characteristics are: (1) purpose, scope, and methodology; (2) problem definition and risk assessment; (3) goals, subordinate objectives, activities, and performance measures; (4) resources, investments, and risk management; (5) organizational roles, responsibilities, and coordination; and (6) integration and implementation. The key characteristics of an effective strategy can aid responsible parties in further developing and implementing the strategy, enhance the strategy's usefulness in resource and policy decisions, and better ensure accountability. As illustrated in Figure 3, we found that DOD's reports addressed two key characteristics, but only partially addressed four other key characteristics of a strategy. For example, the strategy reports to Congress included elements of characteristics, such as a goal and objectives, but did not fully identify implementing parties, delineate roles and responsibilities for managing electronic warfare across the department, or identify outcome-related performance measures that could guide the implementation of electronic warfare efforts and help ensure accountability. Similarly, the reports provided acquisition program and research and development project data, but did not target resources and investments at some key activities associated with implementing the strategy. When investments are not tied to strategic goals and priorities, resources may not be used effectively and efficiently. Our past work has shown that such characteristics can help shape policies, programs, priorities, resource allocations, and standards in a manner that is conducive to achieving intended results. DOD's fiscal year 2011 report is described here because the fiscal year 2012 report, issued in November 2011, is classified. However, unclassified portions of that document note that the fiscal year 2011 report remains valid as the base DOD strategy and that the fiscal year 2012 report updates its predecessor primarily to identify ongoing efforts to improve DOD's electronic warfare capabilities and to provide greater specificity to current threats. The fiscal year 2011 Electronic Warfare Strategy of the Department of Defense report (electronic warfare strategy report)—the base electronic warfare strategy—addressed two and partially addressed four of the six desirable characteristics of a strategy identified by GAO. There may be considerable variation in the extent to which the strategy addressed specific elements of the characteristics that GAO determined to be partially addressed. Our analysis of the fiscal year 2011 report's characteristics is as follows.

Purpose, scope, and methodology: Addressed. The fiscal year 2011 electronic warfare strategy report identifies the purpose of the strategy, citing as its impetus section 1053 of the National Defense Authorization Act for Fiscal Year 2010, and articulates a maturing, twofold strategy focused on integrating electronic warfare capabilities into all phases and at all levels of military operations, as well as developing, maintaining, and protecting the maneuver space within the electromagnetic spectrum necessary to enable military capabilities. The report's scope also encompasses data on acquisition programs and research and development projects. Additionally, the report includes some methodological information by citing a principle that guided its development. Specifically, the report states that a key aspect of the strategy is the concept of the electromagnetic spectrum as maneuver space.

Problem definition and risk assessment: Addressed. The fiscal year 2011 electronic warfare strategy report defines the problem the strategy intends to address, citing the challenges posed to U.S. 
forces by potential adversaries’ increasingly sophisticated technologies, the military’s increased dependence on the electromagnetic spectrum, and the urgent need to retain and expand remaining U.S. advantages. The report also assesses risk by identifying threats to, and vulnerabilities of critical operations, such as Airborne Electronic Attack and self-protection countermeasures. Goals, subordinate objectives, activities, and performance measures: Partially Addressed. The fiscal year 2011 electronic warfare strategy report communicates an overarching goal of enabling electromagnetic spectrum maneuverability and cites specific objectives, such as selectively denying an adversary’s use of the spectrum and preserving U.S. and allied forces’ ability to maneuver within the spectrum. The report also identifies key activities associated with the strategy, including developing (1) coherent electronic warfare organizational structures and leadership, (2) an enduring and sustainable approach to continuing education, and (3) capabilities to implement into electronic warfare systems. The report does not identify performance measures that could be used to gauge results and help ensure accountability. Resources, investments, and risk management: Partially Addressed. The fiscal year 2011 electronic warfare strategy report broadly targets resources and investments by emphasizing the importance of continued investment in electronic attack, electronic protection, and electronic support capabilities. The report also notes some of the associated risks in these areas, calling for new methods of ensuring U.S. control over the electromagnetic spectrum in light of the adversary’s advances in weapons and the decreasing effectiveness of traditional lines of defense, such as airborne electronic attack and self-protection countermeasures. The report identifies some of the costs associated with the strategy by providing acquisition program and research and development project and cost data, and notes that part of the strategy is to identify and track investments in electronic warfare systems, which often are obscured within the development of the larger weapons platforms they typically support. However, the strategy does not target investments by balancing risk against costs, or discuss other costs associated with implementing the strategy by, for example, targeting resources and investments at key activities, such as developing electronic warfare organizational structures and leadership and developing an enduring and sustainable approach to continuing education. Organizational roles, responsibilities, and coordination: Partially Addressed. The fiscal year 2011 electronic warfare strategy report provides an overview of past and ongoing electronic warfare activities within the military services and DOD, and identifies several mechanisms that have or could be used to foster coordination across the department. For example, it outlines the Army’s efforts to create a new career field for electronic warfare officers and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics’ electronic warfare integrated planning team. However, the report does not fully identify the departmental entities responsible for implementing the strategy, discuss the roles and responsibilities of implementing parties, or specify implementing entities’ relationships in terms of leading, supporting, and partnering. Integration and implementation: Partially Addressed. 
The fiscal year 2011 electronic warfare strategy report describes the department’s approach to ensuring maneuverability within the electromagnetic spectrum, thus supporting National Defense Strategy objectives that rely on use and control of the spectrum. The strategy’s overarching aim of ensuring electromagnetic spectrum maneuverability also is consistent with concepts contained in the department’s electromagnetic spectrum strategy documents—which collectively emphasize the importance of assured spectrum access. The strategy does not, however, discuss the department’s plans for implementing the strategy. DOD’s electronic warfare strategy reports were issued in response to the National Defense Authorization Act for Fiscal Year 2010 and were not specifically required to address all the characteristics we consider to be desirable for an effective strategy. Additionally, DOD’s fiscal year 2011 report states that the strategy is still maturing and that subsequent reports to Congress will refine the department’s vision. Nonetheless, we consider it useful for DOD’s electronic warfare strategy to address each of the characteristics we have identified in order to provide guidance to the entities responsible for implementing DOD’s strategy and to enhance the strategy’s utility in resource and policy decisions—particularly in light of the diffuse nature of DOD’s electronic warfare programs and activities, as well as the range of emerging technical, conceptual, and organizational challenges and changes in this area. Further, in the absence of clearly defined roles and responsibilities, and other elements of key characteristics, such as measures of performance in meeting goals and objectives, entities responsible for implementing DOD’s strategy may lack the guidance necessary to establish priorities and milestones, thereby impeding their ability to achieve intended results within a reasonable time frame. As a result, DOD lacks assurance that its electronic warfare programs and activities are aligned with strategic priorities and are managed effectively. For example, without an effective strategy, DOD is limited in its ability to reduce the potential for unnecessary overlap in the airborne electronic attack acquisition activities on which we have previously reported. DOD has taken some steps to address a critical leadership gap identified in 2009, but it has not established a departmentwide governance framework for planning, directing, and controlling electronic warfare activities. DOD is establishing a Joint Electromagnetic Spectrum Control Center (JEMSCC) under Strategic Command in response to the leadership gap for electronic warfare. However, DOD has not documented the objectives or implementation tasks and timeline for the JEMSCC. In addition, DOD has not updated key guidance to reflect recent policy changes regarding electronic warfare management and oversight roles and responsibilities. For example, it is unclear what the JEMSCC’s role is in relation to other DOD organizations involved in the management of electronic warfare, such as the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Moreover, we found that DOD may face challenges in its oversight of electronic warfare as a result of the evolving relationship between electronic warfare and cyberspace operations. DOD has taken some steps to address a critical leadership gap by establishing the JEMSCC under Strategic Command.
However, because DOD has yet to define specific objectives for the center, outline major implementation tasks, and define metrics and timelines to measure progress, it is unclear to what extent the center will address the identified existing leadership deficiencies. The Center for Strategic and International Studies reported insufficient leadership as the most critical among 34 capability gaps affecting electronic warfare. As a result of the absence of leadership, the department was significantly impeded from both identifying departmentwide needs and solutions and eliminating potentially unnecessary overlap among the military services’ electronic warfare acquisitions. Specifically, the department lacked a joint leader and advocate with the authority to integrate and influence electronic warfare capabilities development, to coordinate internal activities, and to represent those activities and interests to outside organizations. Mitigating the leadership gap was identified not only as the highest priority, but also a prerequisite to addressing the other 33 gaps. The Center for Strategic and International Studies report was one of two parallel studies commissioned by the Joint Requirements Oversight Council to assess potential organizational and management solutions to the leadership gap. These studies considered a number of options, including an organization under the Deputy Secretary of Defense, an activity controlled by the Chairman of the Joint Chiefs of Staff, and an organization at Strategic Command. As a result of these studies, in January 2011, DOD initiated efforts to establish the JEMSCC under Strategic Command as the focal point of joint electronic warfare advocacy. This solution was chosen, in part, in recognition of Strategic Command’s resident electronic warfare expertise as well as its already assigned role as an electronic warfare advocate. In January 2011, the Joint Requirements Oversight Council directed Strategic Command to develop an implementation plan for the electronic warfare center to be submitted for council approval no later than May 2011. The plan was to delineate (1) the center’s mission, roles, and responsibilities; (2) command and control, reporting, and support relationships with combatant commands, military services, and U.S. Government departments and agencies; and (3) minimum requirements to achieve initial operational capability and full operational capability. The Joint Requirements Oversight Council subsequently approved an extension of the center’s implementation plan submission to August 2011. Subsequently, in December 2011, the oversight council issued a memorandum that closed the requirement to submit an implementation plan to the council and stated that Strategic Command had conducted an internal reorganization and developed a center to perform the functions identified in the internal DOD study. In December 2011, Strategic Command issued an operations order that defined the JEMSCC as the primary focal point for electronic warfare, supporting DOD advocacy for joint electronic warfare capability requirements, resources, strategy, doctrine, planning, training, and operational support. This order provided 22 activities that the center is to perform. Federal internal control standards require that organizations establish objectives and clearly define key areas of authority and responsibility. 
In addition, best practices for strategic planning have shown that effective and efficient operations require detailed plans outlining major implementation tasks and defined metrics and timelines to measure progress. Moreover, the independent study prepared for DOD similarly emphasized the importance of clearly defining the center’s authorities and responsibilities, noting that the center’s success would hinge, in part, on specifying how it is expected to relate to the department as a whole as well as its expected organizational outcomes. However, as of March 2012, Strategic Command had not issued an implementation plan or other documentation that defines the center’s objectives and outlines major implementation tasks, metrics, and timelines to measure progress. Strategic Command officials told us in February 2012 that an implementation plan had been drafted, but that there were no timelines for the completion of the implementation plan or a projection for when the center would reach its full operational capability. As a result, it remains unclear whether or when the JEMSCC will provide effective departmentwide leadership and advocacy for electronic warfare, and influence resource decisions related to capability development. According to officials from Strategic Command, the JEMSCC will consist of staff from Strategic Command’s Joint Electronic Warfare Center at Lackland Air Force Base, Texas, and the Joint Electromagnetic Preparedness for Advanced Combat organization at Nellis Air Force Base, Nevada. These officials stated that while each of JEMSCC’s component groups’ missions will likely evolve as the center matures, the JEMSCC components would continue prior support activities, such as the Joint Electronic Warfare Center’s support to other combatant commands through its Electronic Warfare Planning and Coordination Cell—a rapid deployment team that provides electronic warfare expertise and support to build electronic warfare capacity. Figure 4 depicts the JEMSCC’s organizational construct. DOD has yet to define objectives and issue an implementation plan for the JEMSCC; however, officials from Strategic Command stated that they anticipated continuity between the command’s previous role as an electronic warfare advocate and its new leadership role, noting that advocacy was, and remains, necessary because electronic warfare capabilities are sometimes undervalued in comparison to other, kinetic capabilities. For example, the JEMSCC will likely build off Strategic Command’s previously assigned advocacy role, in part, by continuing to advocate for electronic warfare via the Joint Capabilities Integration and Development System process—DOD’s process for identifying and developing capabilities needed by combatant commanders—and by providing electronic warfare expertise. Specifically, Strategic Command officials stated that the JEMSCC, through Strategic Command, would likely provide input to the development of joint electronic warfare requirements during the joint capabilities development process. However, combatant commands, such as Strategic Command, provide one of many inputs to this process. Further, as we have previously reported, council decisions, while influential, are advisory to acquisition and budget processes driven by military service investment priorities. The JEMSCC’s ability to affect resource decisions via this process is likely to be limited. Officials we spoke with across DOD, including those from the military services and Strategic Command, recognized this challenge.
Specifically, Strategic Command officials told us that for JEMSCC to influence service-level resource decisions and advocate effectively for joint electronic warfare capabilities, the JEMSCC would need to not only participate in the joint capabilities development process, but would also need authorities beyond those provided by the Unified Command Plan, such as the authority to negotiate with the military services regarding resource decisions. Similarly, we found that while the officials we spoke with from several DOD offices that manage electronic warfare, including offices within the military services, were unaware of the center’s operational status and unclear regarding its mission, roles, and responsibilities, many also thought it to be unlikely that the JEMSCC—as a subordinate center of Strategic Command—would possess the requisite authority to advocate effectively for electronic warfare resource decisions. These concerns were echoed by the independent study, which noted that the center would require strong authorities to substantially influence the allocation of other DOD elements’ resources. Additionally, limited visibility across the department’s electronic warfare programs and activities may impede the center’s ability to advocate for electronic warfare capabilities development. Specifically, Strategic Command officials told us that they do not have access to information regarding all of the military services’ electronic warfare programs and activities, particularly those that are highly classified or otherwise have special access restrictions. In addition, Strategic Command officials told us that they do not have visibility over or participate in rapid acquisitions conducted through the joint capabilities development process. In our March 2012 report on DOD’s airborne electronic attack strategy and acquisitions, we reported that certain airborne electronic attack systems in development may offer capabilities that unnecessarily overlap with one another—a condition that appears most prevalent with irregular warfare systems that the services are acquiring under DOD’s rapid acquisitions process (GAO-12-175 and GAO-12-342SP). The JEMSCC’s exclusion from this process is likely to limit its ability to develop the departmentwide perspective necessary for effective advocacy. Moreover, in the absence of clearly defined objectives and an implementation plan outlining major implementation tasks and timelines to measure progress, these potential challenges reduce DOD’s level of assurance that the JEMSCC will provide effective departmentwide leadership for electronic warfare capabilities development. Federal internal control standards also call for an organization to assess and evaluate its internal control to assure that the actions in place are effective and updated when necessary. DOD’s two primary directives that provide some guidance for departmentwide oversight of electronic warfare are: DOD Directive 3222.4 (Electronic Warfare and Command and Control Warfare Countermeasures)—Designates the Under Secretary of Defense for Acquisition (now Acquisition, Technology, and Logistics) as the focal point for electronic warfare within the department. However, the directive was issued in 1992 and updated in 1994, and does not reflect subsequent changes in policy or organizational structures. For example, the directive does not reflect the establishment of the JEMSCC under Strategic Command.
DOD Directive 3600.01 (Information Operations)—Issued in 2006 and revised in May 2011, this directive provides the department with a framework for oversight of information operations, which was defined as the integrated employment of the core capabilities of electronic warfare, computer network operations, military information support operations (formerly referred to as psychological operations), military deception, and operations to influence, disrupt, corrupt, or usurp adversarial human and automated decision making while protecting that of the United States. However, the definition of oversight responsibilities for information operations has changed, and these changes have not yet been reflected in DOD Directive 3600.01. DOD Directive 3222.4 has not been updated to reflect the responsibilities for electronic warfare assigned to Strategic Command. Both the December 2008 and April 2011 versions of the Unified Command Plan assigned Strategic Command responsibility for advocating for joint electronic warfare capabilities. Similarly, the directive has not been updated to reflect the establishment of the JEMSCC and its associated electronic warfare responsibilities. Specifically, the directive does not acknowledge that JEMSCC has been tasked by Strategic Command as the primary focal point for electronic warfare; rather, the directive designates the Under Secretary of Defense for Acquisition, Technology, and Logistics as the focal point for electronic warfare within DOD. As a result, it is unclear what JEMSCC’s roles and responsibilities are in relation to those of the Under Secretary of Defense for Acquisition, Technology, and Logistics. For example, it’s unclear what JEMSCC’s role will be regarding development of future iterations of the DOD’s electronic warfare strategy report to Congress, which is currently produced by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Also it is unclear what role, if any, the JEMSCC will have in prioritizing electronic warfare investments. Moreover, the directive has not been updated to reflect the Secretary of Defense’s memorandum issued in January 2011, which assigned individual capability responsibility for electronic warfare and computer network operations to Strategic Command. DOD Directive 3600.01 provides both the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Under Secretary of Defense for Intelligence with responsibilities that aid in the oversight of electronic warfare within DOD. However, pursuant to the Defense Secretary’s January 2011 memo, the directive is under revision to accommodate changes in roles and responsibilities. Under the current version of DOD Directive 3600.01, the Under Secretary of Defense for Intelligence is charged with the role of Principal Staff Advisor to the Secretary of Defense for information operations. The Principal Staff Advisor is responsible for, among other things, the development and oversight of information operations policy and integration activities as well as the coordination, oversight, and assessment of the efforts of DOD components to plan, program, develop, and execute capabilities in support of information operations requirements. Additionally, the current Directive 3600.01 identifies the Under Secretary of Defense for Acquisition, Technology, and Logistics as responsible for establishing specific policies for the development of electronic warfare as a core capability of information operations. 
Under the requirements of DOD acquisition policy, the Under Secretary of Defense for Acquisition, Technology, and Logistics regularly collects cost, schedule, and performance data for major programs. In some cases, electronic warfare systems are reported as distinct programs and their cost information is collected; in other cases, electronic warfare systems are subcomponents of larger programs, and cost information is not regularly collected for these separate subsystems. Additionally, the Under Secretary—in coordination with the Army, the Navy, and the Air Force—is developing an implementation road map for electronic warfare science and technology. The road map is supposed to coordinate investments across DOD to accelerate the development and delivery of capabilities. The road map is expected to be completed in late summer of 2012. The Secretary of Defense issued a memorandum in January 2011 that prompted DOD officials to begin revising DOD Directive 3600.01. The memorandum redefined information operations as “the integrated employment, during military operations, of information-related capabilities in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision-making of adversaries and potential adversaries while protecting our own.” Previously, DOD defined information operations as the “integrated employment of the core capabilities of electronic warfare, computer network operations, psychological operations, military deception, and operations security, in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision making while protecting our own.” According to DOD officials, the revised definition removed the term core capabilities because it put too much emphasis on the individual core capabilities and too little emphasis on the integration of these capabilities. Additionally, the memorandum noted that the Under Secretary of Defense for Policy began serving as the Principal Staff Advisor for information operations as of October 1, 2010, and charged the Under Secretary of Defense for Policy with revising DOD Directive 3600.01 to reflect these responsibilities. According to the memorandum, the Principal Staff Advisor is to serve as the single point of fiscal and program accountability for information operations. However, according to DOD officials, this accountability oversight covers only the integration of information operations-related capabilities and does not cover the formerly defined core capabilities of information operations, including electronic warfare and computer network operations. For example, DOD officials stated that the Principal Staff Advisor for information operations would maintain program accountability where information operations-related capabilities were integrated but would not maintain program accountability for all information-related capabilities. However, the memorandum does not clearly describe the specific responsibilities of the Principal Staff Advisor for information operations. The Secretary’s memorandum directed the Under Secretary of Defense for Policy, together with the Under Secretary of Defense (Comptroller) and Director of Cost Analysis and Program Evaluation, to continue to work to develop standardized budget methodologies for information operations-related capabilities and activities. However, these budget methodologies would capture only data related to information operations.
For example, according to Under Secretary of Defense for Policy officials, they do not collect or review electronic warfare financial data, but may review this data in the future to determine if it relates to integrated information operations efforts. Officials from the Office of the Under Secretary of Defense for Policy stated that DOD Directive 3600.01 was under revision to reflect these and other changes as directed by the Secretary’s memorandum. Until the underlying directive is revised, there may be uncertainty regarding which office has the authority to manage and oversee which programs. Moreover, until this directive is updated, it is not clear where the boundaries are for oversight of electronic warfare between the Under Secretary of Defense for Policy and the Under Secretary of Defense for Acquisition, Technology, and Logistics. Table 1 compares the oversight roles and responsibilities for electronic warfare as described in the two DOD directives and the Secretary’s 2011 policy memorandum. DOD may face challenges in its oversight of electronic warfare because of the evolving relationship between electronic warfare and cyberspace operations, specifically computer network operations; both are information operations-related capabilities. According to DOD, to ensure all aspects of electronic warfare can be developed and integrated to achieve electromagnetic spectrum control, electronic warfare must be clearly and distinctly defined in its relationship to information operations (to include computer network operations) and the emerging domain of cyberspace. In the previous section, we noted that DOD’s directives do not clearly define the roles and responsibilities for the oversight of electronic warfare in relation to the roles and responsibilities for information operations. The current DOD Directive 3600.01 does not clearly specify what responsibilities the Principal Staff Advisor has regarding the integration of information operations-related capabilities—specifically the integration of electronic warfare capabilities with computer network operations. Further, DOD’s fiscal year 2011 electronic warfare strategy report to Congress, which delineated its electronic warfare strategy, stated that the strategy has two, often co-dependent capabilities: traditional electronic warfare and computer network attack, which is part of cyberspace operations. Moreover, according to DOD officials, the relationship between electronic warfare and cyberspace operations—including computer network attack—is still evolving, which is creating both new opportunities and challenges. There will be operations and capabilities that blur the lines between cyberspace operations and electronic warfare because of the continued expansion of wireless networking and the integration of computers and radio frequency communications. According to cognizant DOD officials, electronic warfare capabilities may permit use of the electromagnetic spectrum as a maneuver space for cyberspace operations. For example, electronic warfare capabilities may serve as a means of accessing otherwise inaccessible networks to conduct cyberspace operations; presenting new opportunities for offensive action as well as the need for defensive preparations. Current DOD doctrine partially describes the relationship between electronic warfare and cyberspace operations. 
Specifically, current joint doctrine for electronic warfare, which was last updated in February 2012, states that since cyberspace requires both wired and wireless links to transport information, both offensive and defensive cyberspace operations may require use of the electromagnetic spectrum for the enabling of effects in cyberspace. Due to the complementary nature and potential synergistic effects of electronic warfare and computer network operations, they must be coordinated to ensure they are applied to maximize effectiveness. When wired access to a computer system is limited, electromagnetic access may be able to successfully penetrate the computer system. For example, use of an airborne weapons system to deliver malicious code into cyberspace via a wireless connection would be characterized as “electronic warfare-delivered computer network attack.” In addition, the doctrine mentions that electronic warfare applications in support of homeland defense are critical to deter, detect, prevent, and defeat external threats such as cyberspace threats. DOD has not yet published specific joint doctrine for cyberspace operations, as we previously reported in GAO, Defense Department Cyber Efforts: DOD Faces Challenges in Its Cyber Activities, GAO-11-75 (Washington, D.C.: July 25, 2011). In that report, we recommended, among other things, that DOD establish a time frame for deciding whether to proceed with a dedicated joint doctrine publication on cyberspace operations and update existing cyber-related joint doctrine. DOD agreed and has drafted, but not yet issued, the joint doctrine for cyberspace operations. According to U.S. Cyber Command officials, it is unclear when the doctrine for cyberspace operations will be issued. The military services also recognize the increasing overlap between electronic warfare and cyberspace operations resulting from the proliferation of information and communications technology. According to a Navy official, the Navy recognizes the evolving relationship between electronic warfare and cyberspace operations and is moving toward defining that relationship. However, the Navy first is working to define the relationship between electronic warfare and electromagnetic spectrum operations. In addition, Air Force Instruction 10-706, Electronic Warfare Operations, states that traditional electronic warfare capabilities are beginning to overlap with cyberspace areas, which is resulting in an increased number of emerging targets such as non-military leadership networks and positioning, navigation, and timing networks. According to U.S. Cyber Command officials, it is important to understand how electronic warfare and cyberspace operations capabilities might be used in an operational setting. Such information could then inform the further development of doctrine. U.S. Cyber Command officials stated that they have participated in regular meetings with representatives from the military services, the National Security Agency, defense research laboratories, and others, to discuss the relationship of electronic warfare and cyberspace operations. Moreover, the Under Secretary of Defense for Acquisition, Technology, and Logistics has established steering committees that are developing road maps for the Secretary of Defense’s seven designated science and technology priority areas—one of which is cyberspace operations and another is electronic warfare. DOD faces significant challenges in operating in an increasingly complex electromagnetic environment.
Therefore, it is important that DOD develop a comprehensive strategy to ensure departmental components are able to integrate electronic warfare capabilities into all phases of military operations and maintain electromagnetic spectrum access and maneuverability. DOD would benefit from a strategy that includes implementing parties, roles, responsibilities, and performance measures, which can help ensure that entities are effectively supporting such objectives, and linking resources and investments to key activities necessary to meet strategic goals and priorities. In the absence of a strategy that fully addresses these and other key elements, the DOD components and military services responsible for implementing this strategy, evaluating progress, and ensuring accountability may lack the guidance necessary to prioritize their activities and establish milestones that are necessary to achieve intended results within a reasonable time frame. Moreover, as a result, DOD may not be effectively managing its electronic warfare programs and activities or using its resources efficiently. For example, an effective strategy could help DOD reduce the potential for unnecessary overlap in the airborne electronic attack acquisition activities on which we have previously reported. The military’s increasing reliance on the electromagnetic spectrum— coupled with a fiscally constrained environment and critical gaps in electronic warfare management—highlights the need for an effective governance framework for managing and conducting oversight of the department’s electronic warfare activities. The absence of such a framework can exacerbate management challenges, including those related to developing and implementing an effective strategy and coordinating activities among stakeholders. Without additional steps to define the purpose and activities of the JEMSCC, DOD lacks reasonable assurance that this center will provide effective departmentwide leadership for electronic warfare capabilities development and ensure the effective and efficient use of its resources. As we previously reported, DOD acknowledges a leadership void that makes it difficult to ascertain whether the current level of investment is optimally matched with the existing capability gaps. Leveraging resources and acquisition efforts across DOD—not just by sharing information, but through shared partnerships and investments—can simplify developmental efforts, improve interoperability among systems and combat forces, and could decrease future operating and support costs. Such successful outcomes can position the department to maximize the returns it gets on its electronic warfare investments. In addition, multiple organizations are involved with electronic warfare and outdated guidance regarding management and oversight may limit the effectiveness of their activities. Both the Under Secretary of Defense for Acquisition, Technology, and Logistics and the JEMSCC have been identified as the focal point for electronic warfare within the department, yet it is unclear what each organization’s roles and responsibilities are in relation to one another. Further, each organization’s management responsibilities related to future iterations of the electronic warfare strategy report to Congress and working with the military services to prioritize investments remain unclear. 
Updating electronic warfare directives and policy documents to clearly define oversight roles and responsibilities for electronic warfare— including any roles and responsibilities related to managing the relationship between electronic warfare and information operations or electronic warfare and cyberspace operations, specifically computer network operations—would help ensure that all aspects of electronic warfare can be developed and integrated to achieve electromagnetic spectrum control. To improve DOD’s management, oversight, and coordination of electronic warfare policy and programs, we recommend that the Secretary of Defense take the following three actions: Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in coordination with the Under Secretary of Defense for Policy and Strategic Command, and others, as appropriate, to include at a minimum the following information in the fiscal years 2013 through 2015 strategy reports for electronic warfare: Performance measures to guide implementation of the strategy and help ensure accountability. These could include milestones to track progress toward closing the 34 capability gaps identified by DOD studies. Resources and investments necessary to implement the strategy, including those related to key activities, such as developing electronic warfare organizational structures and leadership. The parties responsible for implementing the department’s strategy, including specific roles and responsibilities. Direct the Commander of Strategic Command to define the objectives of the Joint Electromagnetic Spectrum Control Center and issue an implementation plan outlining major implementation tasks and timelines to measure progress. Direct the Under Secretary of Defense for Policy, in concert with the Under Secretary of Defense for Acquisition, Technology, and Logistics, as appropriate, to update key departmental guidance regarding electronic warfare—including DOD Directives 3222.4 (Electronic Warfare and Command and Control Warfare Countermeasures) and 3600.01 (Information Operations)—to clearly define oversight roles and responsibilities of and coordination among the Under Secretary of Defense for Policy; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Joint Electromagnetic Spectrum Control Center. Additionally, the directives should clarify, as appropriate, the oversight roles and responsibilities for the integration of electronic warfare and cyberspace operations, specifically computer network operations. In written comments on a draft of this report, DOD partially concurred with our first recommendation and concurred with our other two recommendations. Regarding our recommendation that DOD include in future strategy reports for electronic warfare, at a minimum, information on (1) performance measures to guide implementation of the strategy, (2) resources and investments necessary to implement the strategy, and (3) parties responsible for implementing the strategy, the department stated that it continues to refine the annual strategy reports for electronic warfare and will expand upon resourcing plans and organization roles; however, the department stated that the strategy was not intended to be prescriptive with performance measures. 
As we have previously stated, the inclusion of performance measures can aid entities responsible for implementing DOD’s electronic warfare strategy in establishing priorities and milestones to aid in achieving intended results within reasonable time frames. We also have noted that performance measures can enable more effective oversight and accountability as progress toward meeting a strategy’s goals may be measured, thus helping to ensure the strategy’s successful implementation. We therefore continue to believe this recommendation has merit. DOD concurred with our remaining two recommendations that (1) the Commander of Strategic Command define the objectives of the JEMSCC and issue an implementation plan for the center and (2) DOD update key departmental guidance regarding electronic warfare. These steps, if implemented, will help to clarify the roles and responsibilities of electronic warfare management within the department and aid in the efficient and effective use of resources. DOD’s written comments are reprinted in their entirety in appendix III. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; and the Commander, U.S. Strategic Command. In addition, this report will be available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To assess the extent to which DOD has developed a strategy to manage electronic warfare, we evaluated DOD’s fiscal year 2011 and 2012 electronic warfare strategy reports to Congress against prior GAO work on strategic planning that identified six desirable characteristics of a strategy. The characteristics GAO previously identified are: (1) purpose, scope, and methodology; (2) problem definition and risk assessment; (3) goals, subordinate objectives, activities, and performance measures; (4) resources, investments, and risk management; (5) organizational roles, responsibilities, and coordination; and (6) integration and implementation. While these characteristics were identified in our past work as desirable components of national-level strategies, we determined that they also are relevant to strategies of varying scopes, including defense strategies involving complex issues. For example, identifying organizational roles, responsibilities, and coordination mechanisms is key to allocating authority and responsibility for implementing a strategy. Further, goals, objectives, and performance measures provide concrete guidance for implementing a strategy, allowing implementing parties to establish priorities and milestones, and providing them with the flexibility necessary to pursue and achieve those results within a reasonable time frame. Full descriptions of these characteristics are contained in appendix II. We determined that the strategy “addressed” a characteristic when it explicitly cited all elements of a characteristic, even if it lacked specificity and details and could thus be improved upon. The strategy “partially addressed” a characteristic when it explicitly cited some, but not all, elements of a characteristic.
Within our designation of “partially addressed,” there may be wide variation between a characteristic for which most of the elements were addressed and a characteristic for which few of the elements were addressed. The strategy “did not address” a characteristic when it did not explicitly cite or discuss any elements of a characteristic, and/or any implicit references were either too vague or general. To supplement this analysis and gain further insight into issues of strategic import, we also reviewed other relevant strategic planning documents—such as DOD’s National Defense Strategy, Strategic Spectrum Plan, and Net-Centric Spectrum Management Strategy—and interviewed cognizant officials from organizations across the department, including the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; U.S. Strategic Command; and the Joint Chiefs of Staff. To assess the extent to which DOD has planned, organized, and implemented an effective governance structure to oversee its electronic warfare policy and programs, we reviewed relevant guidance and documents, including DOD Directive 3222.4, Electronic Warfare and Command and Control Warfare Countermeasures (Washington, D.C.: July 31, 1992, Incorporating Change 2, Jan. 28, 1994); DOD’s fiscal year 2011 and 2012 electronic warfare strategy reports to Congress; and classified and unclassified briefings and studies related to DOD’s identification of and efforts to address electronic warfare capability gaps, including DOD’s 2009 Electronic Warfare Initial Capabilities Document. We also reviewed DOD and military service reports, plans, concepts of operation, and outside studies that discuss DOD’s definitions of electronic warfare and cyberspace operations. In addition, we interviewed cognizant DOD officials to obtain information and perspectives regarding policy, management, and technical issues related to electronic warfare, information operations, electromagnetic spectrum control, and cyberspace operations. In addressing both of our objectives, we obtained relevant documentation from and/or interviewed officials from the following DOD offices, combatant commands, military services, and combat support agencies: Office of the Under Secretary of Defense for Policy; Office of the Under Secretary of Defense for Intelligence; Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; Office of the Assistant Secretary of Defense for Networks and Information Integration/DOD Chief Information Officer; Joint Chiefs of Staff; U.S. Cyber Command, Fort Meade, Maryland; U.S. Pacific Command, Camp H.M. Smith, Hawaii; U.S. Strategic Command, Offutt Air Force Base, Nebraska; Joint Electromagnetic Spectrum Control Center, Offutt Air Force Base, Nebraska; Joint Electronic Warfare Center, Lackland Air Force Base, Texas; Office of the Deputy Chief of Staff of the Army for Operations, Plans, and Training, Electronic Warfare Division; Training and Doctrine Command, Combined Arms Center Electronic Warfare Proponent Office, Fort Leavenworth, Kansas; U.S. Air Force—Electronic Warfare Division; U.S. Marine Corps—Headquarters, Electronic Warfare Branch; U.S. Navy—Office of the Deputy Chief of Naval Operations for Information Dominance, Electronic and Cyber Warfare Division; Naval Sea Systems Command, Naval Surface Warfare Center; Naval Sea Systems Command, Program Executive Office for Navy Fleet Forces Cyber Command, Fleet Electronic Warfare Center, Joint Expeditionary Base Little Creek-Fort Story, Virginia; Defense Information Systems Agency—Defense Spectrum Organization; and National Security Agency, Fort Meade, Maryland. We conducted this performance audit from July 2011 to July 2012 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We previously identified a set of desirable strategy characteristics to aid responsible parties in implementation, enhance the strategies’ usefulness in resource and policy decisions, and better ensure accountability. Table 2 provides a brief description of each characteristic and its benefit. In addition to the contact named above, key contributors to this report were Davi M. D’Agostino, Director (retired); Mark A. Pross, Assistant Director; Carolynn Cavanaugh; Ryan D’Amore; Brent Helt; and Richard Powelson.
Airborne Electronic Attack: Achieving Mission Objectives Depends on Overcoming Acquisition Challenges. GAO-12-175. Washington, D.C.: March 29, 2012.
2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.
Defense Department Cyber Efforts: Definitions, Focal Point, and Methodology Needed for DOD to Develop Full-Spectrum Cyberspace Budget Estimates. GAO-11-695R. Washington, D.C.: July 29, 2011.
Defense Department Cyber Efforts: DOD Faces Challenges in Its Cyber Activities. GAO-11-75. Washington, D.C.: July 25, 2011.
Defense Department Cyber Efforts: More Detailed Guidance Needed to Ensure Military Services Develop Appropriate Cyberspace Capabilities. GAO-11-421. Washington, D.C.: May 20, 2011.
Defense Management: Perspectives on the Involvement of the Combatant Commands in the Development of Joint Requirements. GAO-11-527R. Washington, D.C.: May 20, 2011.
Electronic Warfare: Option of Upgrading Additional EA-6Bs Could Reduce Risk in Development of EA-18G. GAO-06-446. Washington, D.C.: April 26, 2006.
Electronic Warfare: Comprehensive Strategy Still Needed for Suppressing Enemy Air Defenses. GAO-03-51. Washington, D.C.: November 25, 2002.
Electronic Warfare: The Army Can Reduce Its Risk in Developing New Radar Countermeasures System. GAO-01-448. Washington, D.C.: April 30, 2001.
DOD has committed billions of dollars to developing, maintaining, and employing warfighting capabilities that rely on access to the electromagnetic spectrum. According to DOD, electronic warfare capabilities play a critical and potentially growing role in ensuring the U.S. military’s access to and use of the electromagnetic spectrum. GAO was asked to assess the extent to which DOD (1) developed a strategy to manage electronic warfare and (2) planned, organized, and implemented an effective governance structure to oversee its electronic warfare policy and programs and their relationship to cyberspace operations. GAO analyzed policies, plans, and studies related to electronic warfare and cyberspace operations and interviewed cognizant DOD officials. The Department of Defense (DOD) developed an electronic warfare strategy, but it only partially addressed key characteristics that GAO identified in prior work as desirable for a national or defense strategy. The National Defense Authorization Act for Fiscal Year 2010 requires DOD to submit to the congressional defense committees an annual report on DOD’s electronic warfare strategy for each of fiscal years 2011 through 2015. DOD issued its fiscal year 2011 and 2012 strategy reports to Congress in October 2010 and November 2011, respectively. GAO found that DOD’s reports addressed two key characteristics: (1) purpose, scope, and methodology and (2) problem definition and risk assessment. However, DOD only partially addressed four other key characteristics of a strategy, including (1) resources, investments, and risk management and (2) organizational roles, responsibilities, and coordination. For example, the reports identified mechanisms that could foster coordination across the department and identified some investment areas, but did not fully identify implementing parties, delineate roles and responsibilities for managing electronic warfare across the department, or link resources and investments to key activities. Such characteristics can help shape policies, programs, priorities, resource allocation, and standards in a manner that is conducive to achieving intended results and can help ensure that the department is effectively managing electronic warfare. DOD has taken steps to address a critical electronic warfare management gap, but it has not established a departmentwide governance framework for electronic warfare. GAO previously reported that effective and efficient organizations establish objectives and outline major implementation tasks. In response to a leadership gap for electronic warfare, DOD is establishing the Joint Electromagnetic Spectrum Control Center under U.S. Strategic Command as the focal point for joint electronic warfare. However, because DOD has yet to define specific objectives for the center, outline major implementation tasks, and define metrics and timelines to measure progress, it is unclear whether or when the center will provide effective departmentwide leadership and advocacy for joint electronic warfare. In addition, key DOD directives providing some guidance for departmentwide oversight of electronic warfare have not been updated to reflect recent changes. For example, DOD’s primary directive concerning electronic warfare oversight was last updated in 1994 and identifies the Under Secretary of Defense for Acquisition, Technology, and Logistics as the focal point for electronic warfare. 
The directive does not define the center’s responsibilities in relation to the office, including those related to the development of the electronic warfare strategy and prioritizing investments. In addition, DOD’s directive for information operations, which is being updated, allocates electronic warfare responsibilities based on the department’s previous definition of information operations, which had included electronic warfare as a core capability. DOD’s oversight of electronic warfare capabilities may be further complicated by its evolving relationship with computer network operations, which is also an information operations-related capability. Without clearly defined roles and responsibilities and updated guidance regarding oversight responsibilities, DOD does not have reasonable assurance that its management structures will provide effective departmentwide leadership for electronic warfare activities and capabilities development and ensure effective and efficient use of its resources. GAO recommends that DOD (1) include in its future electronic warfare strategy reports to Congress certain key characteristics, including performance measures, key investments and resources, and organizational roles and responsibilities; (2) define objectives and issue an implementation plan for the Joint Electromagnetic Spectrum Control Center; and (3) update key departmental guidance to clearly define oversight roles, responsibilities, and coordination for electronic warfare management, and the relationship between electronic warfare and cyberspace operations. DOD generally concurred with these recommendations but did not agree that the strategy should include performance measures. GAO continues to believe this recommendation has merit.
There are several levels of DOD organizations that are involved in ballistic missile defense operations. In general, these organizations can be categorized into “tiers” as shown in the figure below. Integrating training is training that includes live participants from more than one tier and/or multiple organizations from within the same tier. Live participants refer to personnel who participate in the exercises using equipment that requires them to operate as they would in an actual ballistic missile defense engagement. According to a Chairman of the Joint Chiefs of Staff Instruction, the joint training vision is for everyone required to conduct military operations to be trained under realistic conditions and to exacting standards prior to execution of those operations. The instruction also sets out tenets of joint training, including “train the way you operate,” and states that joint training must be based on relevant conditions and realistic standards. In addition, according to joint doctrine for joint operations to counter theater air and missile threats across the range of military operations, coordination between organizations involved in cross-boundary missile defense operations must be rehearsed—i.e., trained—not just planned. Depending on the type of ballistic missile defense engagement, not all four tiers need to be involved in each event for the training to be realistic; however, ballistic missile defense operations generally necessitate integration both horizontally across a tier and vertically between at least two tiers. For example, engaging a ballistic missile threat may require horizontal coordination across more than one combatant command and multiple elements as well as vertical coordination from the combatant commands down to the elements. Finally, DOD recognizes the importance of integrating ballistic missile defense training horizontally and vertically. DOD’s Strategic Plan for the Next Generation of Training for the Department of Defense considers synchronizing training among the services, combatant commands, and others to be a requirement of training integration and states that an immersive training environment must support full-spectrum operations, including missile defense. In July 2010, to enhance training integration for the BMDS, U.S. Strategic Command, U.S. Joint Forces Command, and MDA began organizing the Ballistic Missile Defense Training and Education Group, which also includes the combatant commands and the services. According to the draft charter, goals for the group include identifying, evaluating, and coordinating ballistic missile defense training requirements and, in coordination with key ballistic missile defense stakeholders, increasing the effectiveness of ballistic missile defense training by promoting the development and implementation of a standardized training program. DOD faces training challenges as it concurrently develops the elements and transitions the elements to the services to operate them. Table 1 describes selected BMDS elements, identifies the lead service for each element, and shows when each element was initially fielded. In order to facilitate the transition of responsibilities for ballistic missile defense elements—including responsibilities for training—from MDA to the services, MDA has overarching memoranda of agreement with the Army, Navy, and Air Force.
Each of these overarching agreements provides a framework for the service and MDA to develop specific agreements on responsibilities, including developing doctrine, training, and facilities requirements for each element. In addition, DOD intends to develop element-specific agreements to specify which organization will fund specific operating and support costs, including training. In 2008, DOD created the BMDS Life Cycle Management Process, in part, to manage the BMDS as a portfolio and develop a ballistic missile defense budget that includes funding for MDA support of ballistic missile defense training. This report is one in a series of reports we have issued on ballistic missile defense. For example, we reported earlier this year that while MDA has improved the transparency and accountability of its acquisition decisions, we found issues limiting the extent to which cost, schedule, and performance can be tracked and unexplained inconsistencies in unit and life-cycle cost baselines. Also this year, we reported that DOD’s implementation of the European Phased Adaptive Approach faces challenges including a lack of clear guidance and life-cycle cost estimates. In addition, in September 2009 we reported that DOD had not identified its requirements for BMDS elements and had not fully established units to operate the elements before making them available for use. DOD generally concurred with our recommendations in these reports, and in their comments indicated plans to take some action to address them. For a list of GAO reports on ballistic missile defense, see the list of Related GAO Products at the end of this report. DOD has identified roles and responsibilities and developed training plans for individual ballistic missile defense elements and combatant commands, but it has not developed an overarching strategy for integrating ballistic missile defense that specifies requirements for training across and among commands and multiple elements. DOD and Joint Staff guidance emphasize the importance of realistic joint training based on relevant conditions and realistic standards. In addition, DOD’s strategic plan for training sets out requirements for training integration including synchronizing DOD component training among the services and combatant commands. The services and combatant commands conduct some integrating training; however, our analysis showed that there are some training gaps such as limited training across more than two tiers and simulated rather than live participation in exercises. For example, only 7 of the 45 exercises we analyzed included live combatant commands, regional operations centers, and tactical units participating together. DOD officials stated that realistic training for the BMDS should include multiple live elements operated by service personnel—rather than simulations—and multiple tiers interacting in the same training scenario, but there are no clear requirements for how much integrating training would be sufficient. GAO’s guide for assessing training programs states that a training program should include the development of an overall training strategy. However, DOD has not developed an overall training strategy for the BMDS because it has not identified an entity to be responsible for doing so. 
Without a clear strategy for conducting integrating ballistic missile defense training across and among commands and elements, DOD faces the risk that organizations that need to work together may have limited opportunities to realistically interact prior to an actual engagement. We analyzed 45 ballistic missile defense exercises that occurred in fiscal years 2009 and 2010 and found examples of integrating training that occurred across and among tiers. The combatant commands conduct major exercises for training their staffs and assigned forces in their mission-essential tasks—of which ballistic missile defense is one—and hosted 21 exercises that included ballistic missile defense in fiscal years 2009 and 2010. These exercises often included live participation from regional operations centers and some live tactical units. At the tactical level, the Navy requires ships to train at least every 6 months in an integrated ballistic missile defense exercise that always includes live Aegis ballistic missile defense ships and often includes cross-element training with live Patriot units. These exercises also occasionally included integrating training with the Command, Control, Battle Management, and Communications and Ground-based Midcourse Defense elements, and often included live participation from regional operations centers. In addition, U.S. Strategic Command’s Joint Functional Component Command for Integrated Missile Defense sponsors integrating training events synchronized with MDA equipment tests. Although these events focus on testing they also provide integrating training opportunities for combatant command staff, regional operations centers, and tactical units. While DOD is performing some integrating BMDS training, our analysis of ballistic missile defense exercises showed some gaps. For example, we found limited live participation of BMDS tactical units and only 10 of the 45 exercises included more than two tiers. Specifically, only 7 of the 45 exercises that we analyzed included live combatant commands, regional operations centers, and tactical units, and only 1 of those also included all four tiers. Moreover, as can be seen in table 2, live participation of BMDS tactical units was limited mostly to Aegis and Patriot. (More detailed results of GAO’s ballistic missile defense exercise analysis are provided in app. II.) Although most of the exercises we analyzed included the participation of either regional operations centers or tactical units, DOD officials at several organizations stated that more training focused on integrating those two tiers is necessary in order to achieve realistic training as identified in DOD policy. Officials also identified the need for an affordable, scalable, distributed, and fully integrated training capability that would allow for more integrating training with live participants within and across the tiers. To address this need, officials indicated DOD is planning a more robust missile mission training capability to enable integrating training through the tiers, but officials said this capability is early in development and, at this time, does not include tactical-level BMDS elements. GAO’s guide for assessing training programs states that a training program should include the development of an overall training strategy and an organization that is held accountable for achieving training goals. 
Additionally, DOD officials stated that increased frequency of integrating training would be beneficial, but there are no clear requirements for how much integrating training would be sufficient. However, DOD has not developed such a training strategy for the BMDS as a whole, one that specifies clear requirements and standards for integrating training, because DOD has not clearly designated an entity to be responsible for integrating ballistic missile defense training across and among combatant commands and services and provided that entity with the authority to do so. Individual combatant commands and services have training responsibilities within their own organizations but generally do not establish training requirements for other organizations. Table 3 below shows training responsibilities of various DOD organizations. The training responsibilities of these DOD organizations do not clearly identify an organization with responsibility for integrating ballistic missile defense training across and among tiers. For example, although U.S. Strategic Command is responsible for synchronizing planning for missile defense, officials explained that the command is only responsible for synchronizing planning for operations and it does not have the responsibility or authority for integrating ballistic missile defense training. U.S. Joint Forces Command is designated as the joint force trainer, but officials explained that their role is to support combatant commands' joint training by providing the technical capabilities for different organizations to train together, not to set training requirements for any particular mission, such as ballistic missile defense. MDA provides initial training for new and upgraded elements, most of the training for the Ground-based Midcourse Defense element, and all training for the Command, Control, Battle Management, and Communications element. MDA is not responsible for developing training requirements for other DOD organizations. In addition, Joint Staff guidance for joint training charges the Chairman with responsibility for formulating policies for joint training and requires the development of training plans, but officials said the training policy generally does not include setting training requirements for any particular mission. DOD recognizes the need for a cross-cutting group to examine BMDS training issues, but its latest effort is structured differently from other groups created to establish joint training requirements and, as a result, may not be as effective. In 2010, DOD organized a group called the Ballistic Missile Defense Training and Education Group. According to the group's draft charter, the department does not have a coordinated ballistic missile defense training and education approach "that will ensure effective synergistic employment of assets…" In addition, the draft charter sets out the group's goals, which include identifying, evaluating, and coordinating ballistic missile defense training requirements and, in coordination with key ballistic missile defense stakeholders, increasing the effectiveness of ballistic missile defense training by promoting the development and implementation of a standardized training program. However, the draft charter does not indicate that the group itself will have the authority to set ballistic missile defense training requirements and standards, or that its members will have the authority to speak on behalf of the organizations they represent. 
Instead, the group is expected to review issues that members nominate and make recommendations for improving training to the group's senior leadership—composed of U.S. Strategic Command, U.S. Joint Forces Command, and MDA—which may, in turn, raise issues to the Missile Defense Executive Board. At a March 2011 meeting, the group identified several issues, such as improving distributed training capabilities and training devices. However, the group has not identified the need to develop a strategy for integrating training across and among tiers that would include training requirements and standards. Although DOD officials have expressed confidence in this group, the group is not quite a year old, is still finalizing its charter, and its effectiveness in identifying and resolving training issues is unproven. Further, it is not clear that any of the three organizations comprising the group's senior leadership would have the authority to develop an integrating training strategy or requirements that all tiers must meet. In similar instances, DOD has designated a lead organization with clearly defined responsibilities and the authority to establish joint training requirements. For example, the Joint Staff has issued instructions for Joint Interface Training and for joint training on the Global Command and Control System. In both instances, the instructions defined responsibilities and provided the designated groups with the authority to develop and implement training requirements. Without a clear strategy that specifies requirements and standards for integrating ballistic missile defense training across and among the commands involved, DOD may have difficulty identifying and resolving training gaps. The lack of a strategy also means that some organizations that are developing a capability to increase live participation in integrating training are doing so without guidance or goals on which organizations should participate and at what frequency—factors that may influence the design and capacity of the training capability. In addition, different organizations may develop varying training requirements and priorities for integrating their training programs with other organizations. Further, without a strategy, DOD runs the risk that organizations that need to work together may have limited opportunities to realistically interact prior to an actual engagement, and this risk may increase over the next few years as more elements are fielded. DOD lacks visibility over the total resources that may be needed to support ballistic missile defense training since the funds are currently dispersed across MDA and the services, and some of the services' budget estimates do not separately identify ballistic missile defense training. An additional complication is that agreements between MDA and the services on funding responsibilities and life-cycle cost estimates—which include training—have not been completed and approved for all elements. We compiled budget documents and data from various sources and estimated that about $4 billion is planned to support ballistic missile defense training from fiscal years 2011 through 2016, but this number could vary as additional capabilities are added. We also found examples of gaps between training requirements and budgeted resources, such as a $300 million requirement in the THAAD Program that is not included in MDA's budget plans. DOD and MDA policies identify the need to complete cost estimates and funding responsibilities for elements as they are developed. 
However, DOD has not yet identified the total resources necessary to support ballistic missile defense training and has not determined the long-term funding responsibilities because there are no procedures or firm deadlines in place requiring that MDA and the services agree on funding responsibilities and complete training cost estimates before elements are fielded. As a result, DOD and congressional decision makers do not have a full picture of the resources that will be needed over time and risk training gaps. DOD's budget and Future Years Defense Program include funds for ballistic missile defense training, but funds are dispersed across MDA and multiple accounts across the services, making it difficult for DOD to identify the total training resources. Currently, MDA's budget supports new equipment training for BMDS elements, the portion of combatant command exercises that include ballistic missile defense events, general ballistic missile defense education courses, all training for the Command, Control, Battle Management, and Communications element, and most training for the Ground-based Midcourse Defense element. The Army and Navy budgets support individual, unit, and sustainment training for their elements, and facilities to support this training. We compiled available budget documents and data from MDA and the services and estimated that about $4 billion is planned to support ballistic missile defense training from fiscal years 2011 through 2016. While we were able to compile an approximate budget estimate, some of the services' ballistic missile defense-specific training budgets are not easily identifiable because some ballistic missile defense training for the services is provided and funded as part of a more comprehensive training program, and some training budget estimates could not be identified. For example, the budget estimates to support multimission elements like Aegis and Patriot include training for ballistic missile defense in addition to training for missions other than ballistic missile defense. Furthermore, Army officials were unable to provide budget estimates for the AN/TPY-2 radar from fiscal years 2011 to 2016 because they only recently began using the Army's budget development system and have not yet estimated costs across the Future Years Defense Program. Table 4 below summarizes GAO's compilation of MDA and the services' budget estimates for training. In addition to the limitations discussed above, funding responsibilities may become increasingly dispersed as DOD transitions responsibilities for the elements from MDA to the services. For example, the Army's budget for the THAAD element will increase over time as the Army assumes full responsibility for individual training in fiscal year 2015. Also, if a lead service is designated as responsible for the Command, Control, Battle Management, and Communications element, some of the training and funding responsibilities for that element would likely transfer from MDA to the lead service. Another factor that complicates estimating the resources to support ballistic missile defense training is that MDA and the services have not fully identified funding responsibilities and life-cycle cost estimates for each of the BMDS elements. MDA's Acquisition Directive identifies the need to develop life-cycle cost estimates—which include training—for the elements at certain phases of development. 
The Strategic Plan for the Next Generation of Training for the Department of Defense, developed by the Office of the Under Secretary of Defense for Personnel and Readiness, highlights the importance of aligning resources to meet training goals. We found that eight of the nine BMDS elements included in our analysis have been fielded, yet planning documents detailing the transition of training responsibilities and life-cycle cost estimates—which include training costs—have not been fully developed and approved for about half of the fielded elements with a designated lead service. In addition, three of the completed agreements do not include service-specific funding to support training. As a result, DOD does not have element-specific agreements or approved training cost estimates for MDA and the services to use in budget development. In addition to the overarching memoranda of agreement, which include a general description of MDA and service roles and responsibilities for the elements, DOD intends for MDA and the services to develop specific agreements for each element that would include funding agreements with details on MDA and the services' funding responsibilities for training as the element transitions from MDA to the service. However, MDA and the services have had difficulty completing these element-specific agreements and, to date, have fully completed agreements for only three of the seven BMDS elements requiring them. For example, officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics stated that MDA and the Army have had difficulty agreeing on funding for the AN/TPY-2 radar and have delayed the completion of the agreement until the Missile Defense Executive Board issues further guidance. Furthermore, while officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics are responsible for monitoring the completion of the agreements and have identified very general deadlines (by fiscal year) to complete them, officials stated that the completion of the agreements is not schedule-driven. Officials also stated that while the remaining element-specific agreements are in staffing, in some cases the services and MDA have not agreed on completion times, and that they are uncertain when the agreements will be finalized. The overarching memoranda of agreement also identify the need for MDA and the services to complete joint life-cycle cost estimates for each of the elements, which would include training cost estimates. MDA and the Army have signed an agreement explaining how they will work together to develop operations and support cost estimates to inform their budgets for the THAAD, Ground-based Midcourse Defense, and AN/TPY-2 elements. However, according to Army officials, some cost estimates are still in development and have not been approved by the Army Cost Review Board, and none of the operations and support cost estimates—including training cost estimates—have been reviewed by DOD's Cost Assessment and Program Evaluation office. For example, Army officials stated that the Army Cost Review Board has not approved the estimates for the THAAD, Ground-based Midcourse Defense, and forward-based AN/TPY-2 radar elements. Officials stated that while the methodology behind the MDA and Army cost estimates is accurate, the Army does not agree with some assumptions on which the cost estimates are based. 
For example, Army officials said that the most recent THAAD estimate did not include unit training costs to relocate THAAD batteries, yet that estimate was used to inform the Army's budget request for THAAD operations. Furthermore, DOD officials confirmed that they only recently began developing operations and support cost estimates with the Navy for the Aegis ballistic missile defense element. DOD has not yet identified the total resources necessary to support ballistic missile defense training and has not determined the long-term funding responsibilities because there are no procedures or firm deadlines in place to ensure that either the element-specific agreements or the life-cycle cost estimates—which include training—are completed before elements are fielded or in time to inform budget development. Without completed memoranda of agreement or cost estimates for supporting MDA and service ballistic missile defense training, there is no transparency over the total resources that DOD may need to fully support ballistic missile defense training. As a result, DOD is at risk of training gaps that may prevent the services and combatant commands from meeting their training requirements. For example, while the Army and MDA are working to prioritize funding to address training for the THAAD element, Army officials identified a $308.6 million discrepancy between MDA's funding and the Army's documented equipment requirements to support individual and unit training. Army officials said that without this equipment, they will have difficulty keeping up with the demand for individual and unit training. Specifically, some critical tasks that would normally be trained at the institutional level would need to be performed by the units on actual tactical equipment rather than training devices, which would result in additional wear and tear on tactical equipment and increase overall training costs. In addition, the Army has identified a $960,000 requirement to upgrade training materials to support sensor manager training for the AN/TPY-2 radar. However, MDA has not funded this requirement, and an Army official indicated that without upgraded training materials, properly trained crews may not be available to operate the radar. Without MDA and service cooperation to develop complete and transparent ballistic missile defense training cost estimates, decision makers do not have the necessary visibility to budget for ballistic missile defense training or identify and address training shortfalls, an issue that may become more problematic as additional elements are fielded. Since training to support ballistic missile defense has been identified as a high priority within the department, the lack of transparency in the funds needed to support ballistic missile defense training hinders DOD's ability to assess competing priorities and decide how to allocate scarce resources to meet training goals. Defending against ballistic missile attacks requires quick responses, and an integrating training strategy is important to connect seams where commands, tiers, or elements must work together. However, there are no DOD requirements or standards for integrating training across and among all of the tiers. Although individual organizations are taking some initial steps, training across and among tiers is still relatively infrequent. In similar instances, DOD has issued guidance to designate an organization with the responsibility and authority for establishing joint training requirements. 
However, DOD has not designated an organization with the responsibility and authority to develop a strategy that would include specific requirements and standards for integrating training across and among all of the tiers for ballistic missile defense. As a result, the department runs the risk that personnel may have limited opportunities to interact across the training tiers and elements under realistic conditions prior to an actual ballistic missile defense engagement. A number of DOD organizations have identified the need for an affordable, scalable, distributed, and fully integrated training capability that would allow all tiers to experience realistic training frequently enough to prepare them for ballistic missile defense operations. Without an entity responsible for developing an integrating training strategy, the department's ability to develop requirements and standards for integrating training across and among all of the tiers, and to assess the advantages and disadvantages of a standardized approach for improving integrating training capabilities, may be hindered. Given that DOD has identified ballistic missile defense as a high-priority mission area and has expended substantial resources to develop the BMDS, it is important that funding for training be clearly and easily identified to ensure that training priorities are being met and budgets are aligned to support training requirements and address any training shortfalls. No full picture of the total service and MDA BMDS training budget exists, since funding is dispersed across the department and there is no procedure or deadline mandating that funding agreements and training cost estimates be completed and approved in time to inform annual budget development. As a result, DOD and congressional decision makers lack visibility over the ballistic missile defense training budget to assess whether budgeted resources are adequate to support ballistic missile defense training and ensure there are no significant training gaps. Until the department addresses these challenges, DOD will likely face increasing risks over time to its ability to provide necessary integrating training as more elements are developed and fielded. We recommend that the Secretary of Defense take the following three actions: To enhance DOD's ability to identify and resolve issues in integrating ballistic missile defense training across and among combatant commands and services and to improve training realism, we recommend that the Secretary of Defense, in consultation with the Under Secretary of Defense for Personnel and Readiness and the Chairman of the Joint Chiefs of Staff, issue guidance that designates an entity to be responsible for integrating training across and among combatant commands and elements and provides that entity with the authority to develop an overall ballistic missile defense training strategy that includes specific requirements and standards for integrating training and for identifying and resolving any gaps in capabilities to enhance integrating training across and among all tiers (or combatant commands and elements). 
To improve the transparency of the resources to support ballistic missile defense training requirements and to inform budget development, we recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force and the Director of the Missile Defense Agency to: (1) set a firm deadline to complete training cost estimates and element-specific agreements for elements already fielded and establish procedures that require that the training cost estimates and element-specific funding agreements delineating funding responsibilities between MDA and the services be completed before additional elements are fielded; and (2) establish procedures that require annual development and reporting of the total BMDS training budget (i.e., all Missile Defense Agency and service costs for individual, unit, and sustainment training and combatant command and service exercise costs). In written comments on a draft of this report, DOD concurred with one recommendation and partially concurred with two recommendations. Although DOD generally concurred with our recommendations, DOD's response did not include specifics about when it intended to complete actions to implement these recommendations. Considering that DOD has identified ballistic missile defense as a high-priority mission area, we believe it is important that DOD take action as soon as possible. After we received DOD's comments, the department completed its security review and determined that this report is unclassified and contains no sensitive information. DOD's comments are reprinted in their entirety in appendix III. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD concurred with our recommendation that DOD issue guidance that designates an entity to be responsible for integrating training across and among combatant commands and elements and provides that entity with the authority to develop an overall ballistic missile defense training strategy. The department further stated that the Office of the Under Secretary of Defense for Personnel and Readiness and U.S. Strategic Command, with the assistance of the Joint Staff, will provide the policy and required advocacy for the development of an integrated training strategy for ballistic missile defense. Although DOD concurred with this recommendation and stated its intention to issue policy for developing an integrating training strategy, the department did not state when it intended to do so. Since defending against ballistic missile attacks requires a quick response, it is important that DOD develop an integrating training strategy to connect seams where commands, tiers, or elements must work together. Therefore, we believe that DOD should issue this policy as soon as possible. DOD partially concurred with our recommendation that the Secretaries of the Army, Navy, and Air Force and the Director of the Missile Defense Agency set a firm deadline to complete training cost estimates and element-specific agreements for elements already fielded and establish procedures that require the completion of training cost estimates and element-specific funding agreements delineating funding responsibilities between MDA and the services before additional elements are fielded. In its comments, DOD stated that new ballistic missile defense capabilities are essential to defense and must not be delayed. 
The department acknowledges the benefit of establishing training cost estimates but believes that these estimates and funding agreements can be developed in parallel with the fielding of additional capabilities. Although DOD partially concurred, it did not state that it would set a firm deadline to implement the recommendation. DOD generally requires that weapons systems complete life-cycle cost estimates—including training cost estimates—before a system is fielded. As we noted in our report, DOD has not completed cost estimates or funding agreements. Further, we reported that MDA and the services have had difficulty completing the agreements for each element that would include details on MDA and the services' funding responsibilities as the elements transition from MDA to the services. Without completed and approved training cost estimates to inform the funding agreements and annual budget development, there is no clear identification of the resources that DOD may need to support ballistic missile defense training, and DOD is at risk of training gaps. In fact, we noted examples of discrepancies between funding and training requirements. Given that DOD has identified ballistic missile defense as a high-priority mission area and has had difficulty completing cost estimates and funding agreements in the past, and that there are already examples of some funding gaps, we continue to believe that DOD should establish a firm deadline to ensure that training cost estimates and element-specific agreements are completed before additional elements are fielded. Finally, DOD partially concurred with our recommendation that the Secretaries of the Army, Navy, and Air Force and the Director of the Missile Defense Agency establish procedures that require annual development and reporting of the total BMDS training budget (i.e., all Missile Defense Agency and service costs for individual, unit, and sustainment training and combatant command and service exercise costs). In its comments, DOD stated that the department defines total ballistic missile defense training costs as those direct or incremental ballistic missile defense system training costs associated with fielding and sustaining element mission readiness for ballistic missile defense capabilities. DOD further stated that the Office of the Under Secretary of Defense for Personnel and Readiness will work with the services and the Missile Defense Agency to develop policy for capturing and reporting total ballistic missile defense training costs as defined above. As we stated in our report, no full picture of the total service and MDA BMDS training budget exists since funding is dispersed across the department and there is no procedure or deadline mandating that funding agreements and training cost estimates be completed and approved in time to inform annual budget development. As a result, DOD and congressional decision makers do not have a full picture of the resources to inform budget development and risk training gaps. Considering that funding for training could face significant budget pressures amid the department's competing demands for current operations, acquisitions, and personnel expenses, we continue to believe it is important that DOD implement the policy for developing and reporting cost estimates for ballistic missile defense training as soon as possible. 
We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, the combatant commands, the Secretaries of the Army, Navy, and Air Force, and the Director of the Missile Defense Agency. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1816 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine the extent to which the Department of Defense (DOD) has developed a plan for integrating ballistic missile defense training across and among commands and multiple elements, we reviewed combatant command and service training plans and assessed whether these plans addressed ballistic missile defense training. To determine the extent to which DOD has identified training roles, responsibilities, and commensurate authorities, we assessed DOD, combatant command, and service instructions, policies, and training plans to identify where training roles, responsibilities, and authorities were clearly identified and whether these documents clearly identified roles, responsibilities, and authorities for integrating training across and among commands and services. Finally, we discussed our results with DOD officials to corroborate our analysis and to discuss any areas where responsibilities may not be clearly identified. To quantify the extent to which Ballistic Missile Defense System (BMDS) training is integrated horizontally across the combatant commands and elements and vertically from the combatant commands down through the elements (i.e., through all tiers), we first developed a standard definition of the training tiers using the description in the Joint Functional Component Command for Integrated Air and Missile Defense's Fiscal Year 2010 through 2011 Annual Training Plan as a guide and confirmed the definitions with various DOD commands. Next, we gathered and analyzed information on 45 training exercises that included ballistic missile defense and were conducted during fiscal years 2009 and 2010. We included all of the exercises led by combatant commands, operations centers, and the services within this time frame. We also included an average representation of the participants in weekly training provided by the Joint Staff to officials at tier one. For each exercise, we gathered information to identify participants at each tier and whether each participant was live or simulated. We summarized the data and corroborated the results with the commands that provided the information. To determine the extent to which DOD has identified and budgeted for the resources to support ballistic missile defense training, we gathered and analyzed available training budget documents and data provided by the Missile Defense Agency (MDA) and the services to support ballistic missile defense training from fiscal years 2011 through 2016, including budget estimates for training in schools, for exercises, and for facilities such as simulators. To determine the funding for Patriot unit training, we obtained from Army officials the average estimated training cost for one unit that the Army uses to develop its budget, and we multiplied that amount by the total number of units across fiscal years 2011 to 2016. 
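The following is a minimal sketch, in Python, of the extrapolation just described: multiplying an average per-unit training cost by the number of units in each fiscal year and summing the results across fiscal years 2011 to 2016. The per-unit cost and unit counts shown are hypothetical placeholders used only for illustration; they are not the Army's actual figures, which are not reproduced in this report.

# Illustrative sketch of the Patriot unit training extrapolation (hypothetical figures only).
avg_cost_per_unit = 2_500_000  # hypothetical average annual training cost for one unit, in dollars
units_by_fiscal_year = {2011: 13, 2012: 13, 2013: 14, 2014: 14, 2015: 15, 2016: 15}  # hypothetical unit counts
# Multiply the per-unit cost by the number of units in each fiscal year, then sum across years.
total_funding = sum(avg_cost_per_unit * units for units in units_by_fiscal_year.values())
print(f"Illustrative Patriot unit training funding, FY2011-FY2016: ${total_funding:,}")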
We documented instances in which the services could not identify training resources specific to ballistic missile defense, and we reported where budget estimates support training for missions in addition to ballistic missile defense or where ballistic missile defense-specific budget estimates were unavailable. We also obtained documentation from MDA and the services on their actual costs to support ballistic missile defense training in fiscal year 2010. We interviewed DOD, combatant command, and service officials to corroborate our compilation of available training budget estimates, and to identify areas where there may be a mismatch or shortfall between training requirements and budget estimates. We interviewed MDA and service officials to determine whether element-specific annexes and joint life-cycle cost estimates for each of the elements had been completed and approved. To ensure the reliability of our data, we provided the tables showing the estimated budgeted amounts for ballistic missile defense training to DOD and service officials for review. Furthermore, to assess the reliability of the computer-processed data provided by the Army to support its ballistic missile defense training budgets, we interviewed knowledgeable officials about the data and internal controls on the system that contains them. We determined that the data were sufficiently reliable for the purposes of this audit. We conducted this performance audit in accordance with generally accepted government auditing standards from July 2010 to July 2011. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Patricia W. Lentini, Assistant Director; Brenda M. Waterfield; Randy F. Neice; Meghan E. Cameron; Joseph J. Watkins; Rebecca Shea; Joel Grossman; Karen Nicole Harms; and Erik Wilkins-McKee made key contributions to this report. Missile Defense: Actions Needed to Improve Transparency and Accountability. GAO-11-372. Washington, D.C.: March 24, 2011. Ballistic Missile Defense: DOD Needs to Address Planning and Implementation Challenges for Future Capabilities in Europe. GAO-11-220. Washington, D.C.: January 26, 2011. Missile Defense: European Phased Adaptive Approach Acquisitions Face Synchronization, Transparency, and Accountability Challenges. GAO-11-179R. Washington, D.C.: December 21, 2010. Defense Acquisitions: Missile Defense Program Instability Affects Reliability of Earned Value Management Data. GAO-10-676. Washington, D.C.: July 14, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Defense Acquisitions: Missile Defense Transition Provides Opportunity to Strengthen Acquisition Approach. GAO-10-311. Washington, D.C.: February 25, 2010. Missile Defense: DOD Needs to More Fully Assess Requirements and Establish Operational Units before Fielding New Capabilities. GAO-09-856. Washington, D.C.: September 16, 2009. Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009. Defense Management: Key Challenges Should Be Addressed When Considering Changes to Missile Defense Agency's Roles and Missions. 
GAO-09-466T. Washington, D.C.: March 26, 2009. Defense Acquisitions: Production and Fielding of Missile Defense Components Continue with Less Testing and Validation Than Planned. GAO-09-338. Washington, D.C.: March 13, 2009. Missile Defense: Actions Needed to Improve Planning and Cost Estimates for Long-Term Support of Ballistic Missile Defense. GAO-08-1068. Washington, D.C.: September 25, 2008. Ballistic Missile Defense: Actions Needed to Improve the Process for Identifying and Addressing Combatant Command Priorities. GAO-08-740. Washington, D.C.: July 31, 2008. Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448. Washington, D.C.: March 14, 2008. Defense Acquisitions: Missile Defense Agency's Flexibility Reduces Transparency of Program Cost. GAO-07-799T. Washington, D.C.: April 30, 2007. Missile Defense: Actions Needed to Improve Information for Supporting Future Key Decisions for Boost and Ascent Phase Elements. GAO-07-430. Washington, D.C.: April 17, 2007. Defense Acquisitions: Missile Defense Needs a Better Balance between Flexibility and Accountability. GAO-07-727T. Washington, D.C.: April 11, 2007. Defense Acquisitions: Missile Defense Acquisition Strategy Generates Results but Delivers Less at a Higher Cost. GAO-07-387. Washington, D.C.: March 15, 2007. Defense Management: Actions Needed to Improve Operational Planning and Visibility of Costs for Ballistic Missile Defense. GAO-06-473. Washington, D.C.: May 31, 2006. Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goals. GAO-06-327. Washington, D.C.: March 15, 2006. Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005. Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005. Future Years Defense Program: Actions Needed to Improve Transparency of DOD's Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004. Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004. Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004. Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003. Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. Washington, D.C.: May 23, 2003. Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003. Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002. Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002. 
Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000. Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-121. Washington, D.C.: May 31, 2000.
Since 2002, the Department of Defense (DOD) has spent over $80 billion on developing and fielding a Ballistic Missile Defense System (BMDS) composed of various land- and sea-based elements employed by multiple combatant commands and services. Since the time available to intercept a missile is short, integrating training among all organizations involved is important to connect seams where commands and elements must work together. In response to House Report 111-491, which accompanied H.R. 5136, GAO assessed the extent to which DOD has (1) developed a plan for integrating ballistic missile defense training across and among commands and multiple elements, and identified training roles, responsibilities, and commensurate authorities; and (2) identified and budgeted for the resources to support training. To do so, GAO analyzed DOD training instructions, plans, exercises, and budgets and assessed the extent to which the Missile Defense Agency (MDA) and the services have agreed on training cost estimates and funding responsibilities. DOD has identified roles and responsibilities and developed training plans for individual ballistic missile defense elements and combatant commands, but has not developed a strategy for integrating training among ballistic missile defense organizations and elements in a manner that requires them to operate as they would in an actual engagement. A Joint Staff Instruction sets out tenets of joint training, including "train the way you operate," and DOD guidance requires synchronization of training among the services and combatant commands. The services and combatant commands are conducting some integrating training--training across and among combatant commands and services--but GAO's analysis of exercises shows that there may be some training gaps. For example, although some exercises included more than one combatant command, few included multiple live elements. GAO's guide for assessing training programs states that a training program should include an overall training strategy and an organization that is held accountable for achieving training goals. However, DOD has not developed an overall strategy that includes requirements and standards for integrating ballistic missile defense training because DOD has not clearly designated an entity to be responsible for integrating training across and among all organizations involved and provided it with the authority to do so. Without an overall strategy that includes requirements and standards for integrating training, DOD runs the risk that the organizations that need to work together may have limited opportunities to realistically interact prior to an actual engagement. DOD lacks visibility over the total resources that may be needed to support ballistic missile defense training since the funds are currently dispersed across MDA and the services, and some of the services' budget estimates do not separately identify ballistic missile defense training. A further complication is that agreements between MDA and the services on funding responsibilities and life-cycle cost estimates--which include training--have not been completed and approved for all elements. GAO compiled budget documents and data from various sources and estimated that about $4 billion has been planned for ballistic missile defense training from fiscal years 2011 through 2016. However, some of the services' resources for ballistic missile defense training are not easily identifiable since some training is funded as part of a more comprehensive training program. 
GAO found examples of gaps between training requirements and budgeted resources, such as a $300 million requirement in the Terminal High Altitude Area Defense program that is not included in MDA's budget plans. DOD and MDA policies identify the need to complete cost estimates and funding responsibilities for elements as they are developed; however, there are no procedures or deadlines in place requiring that MDA and the services agree on funding responsibilities and complete training cost estimates before elements are fielded. As a result, DOD and congressional decision makers do not have a full picture of the resources that will be needed over time and risk training gaps. GAO recommends that DOD designate an entity with the authority to develop a strategy for integrating training; set a deadline to complete training cost estimates and funding agreements; and report total BMDS training cost estimates. DOD generally concurred with the merits of GAO's recommendations but did not commit to a time frame for implementation.
Foreign nationals who wish to visit the United States, including business travelers and tourists, must generally obtain a nonimmigrant visa (NIV). The majority of travelers visiting the United States from Mexico receive an NIV Border Crossing Card, which is valid for 10 years. In order to obtain a Border Crossing Card, applicants must generally: (1) schedule an appointment for a visa interview at a U.S. consulate, (2) fill out an application and pay applicable fees, (3) have their photos taken and fingerprints collected at a U.S. consulate, (4) have their information checked in the Consular Lookout and Support System—State's name-check database that consulates use to access critical information for visa adjudication, and (5) have an interview with a consular officer, who is responsible for making the adjudication decision. In 1996, Congress passed the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA), which required that every Border Crossing Card issued after April 1, 1998, contain a biometric identifier, such as a fingerprint, and be machine readable. The law also mandated that all Border Crossing Cards issued before April 1, 1998, would expire on October 1, 1999, regardless of when their validity period ended. This deadline was extended by Congress two times, first to September 30, 2001, and then to September 30, 2002. The passage of IIRIRA created a significant surge in Mission Mexico's NIV workload, as Border Crossing Card holders sought to obtain the new visas before the congressionally mandated expiration date. This culminated in a historic high in NIV workload in fiscal year 2001, when the mission processed 2,869,000 NIV applications. We have previously reported on challenges State faced in managing its NIV workload. Among other things, we found that NIV applicants have often had to wait for extended periods of time to receive appointments for interviews. Believing that wait times for NIV interviews were excessive, in February 2007, State announced a worldwide goal of interviewing NIV applicants within 30 days. In the year before the 30-day goal was announced, the average wait time across the consulates in Mexico had been as high as 73 days; by the time of the announcement of the 30-day goal, however, Mission Mexico had already successfully reduced the average wait time to less than 30 days at all but one of its posts. Since February 2007, the mission has successfully kept the average wait time among the consulates at less than 30 days. In response to recommendations in the 9/11 Commission report, the Intelligence Reform and Terrorism Prevention Act of 2004, as amended, required that the Secretary of Homeland Security, in conjunction with the Secretary of State, develop and implement a plan that requires United States citizens to provide a passport, other document, or combination of documents that the Secretary of Homeland Security deems sufficient to show identity and citizenship when entering the United States from certain countries, including Mexico. This will represent a significant change for many U.S. citizens living in Mexico, who have until recently been able to routinely cross between the United States and Mexico with more limited documentation. The Department of Homeland Security (DHS) and State are implementing these requirements through the Western Hemisphere Travel Initiative (WHTI). 
DHS implemented WHTI at all air ports of entry into the United States on January 23, 2007, and plans to implement the requirements at land and sea ports of entry beginning in June 2009, assuming that DHS and State can certify 3 months in advance that certain criteria have been met, as required under the law. Ten years after the first surge in demand for Border Crossing Cards began in fiscal year 1998, State anticipates another surge in NIV demand in Mexico as these cards begin to expire and millions of card holders apply for renewals at U.S. consulates. In addition to this cyclical surge in demand caused by the expiring Border Crossing Cards, State officials anticipate that Mission Mexico will continue to experience steady growth in demand from first-time visa applicants. To assist in preparing for these increases, State has developed forecasts of the expected future NIV workload in Mexico. The NIV projections and forecasting methodology discussed in this report are based upon data State provided to us in February and April 2008. On June 18, State informed us that it has developed revised NIV forecasts for Mission Mexico based upon an alternative methodology. We have not yet had time to analyze these NIV forecasts or incorporate them into this testimony, but we may include a discussion of them in our final report, which is scheduled to be completed at the end of July 2008. State’s forecasts, as of April 2008, anticipate that the upcoming surge in NIV demand will follow a pattern similar to the previous Border Crossing Card surge from fiscal years 1998 to 2002, as shown in figure 1. According to the forecasts, the surge will begin in fiscal year 2008, with missionwide NIV demand peaking at a little more than 3 million applications in fiscal year 2011—a 103 percent increase in demand from fiscal year 2007. The forecasts show the surge beginning to abate in fiscal year 2012. In addition to the missionwide forecast, State has developed demand forecasts for individual consulates. As shown in figure 2, State’s forecasts anticipate that Mexico City will have the highest levels of demand, with applications growing to over 580,000 in fiscal year 2010. While Mexico City is projected to have the highest overall demand, State anticipates that the steepest increases in demand will occur at border posts. This follows a pattern similar to the previous Border Crossing Card surge, where the border consulates assumed a greater share of the total mission workload during the surge, with this share then diminishing again at the surge’s end. Estimating future NIV demand is inherently uncertain, and State acknowledges that several factors could affect the accuracy of its April 2008 NIV demand forecasts. First, the forecasts are based heavily upon Change Navigators’ 2005 Consular Affairs Futures Study (CAFS), which generated NIV demand forecasts for various high-volume and high-growth missions around the globe, including Mexico. Thus, the extent to which the underlying CAFS numbers prove to be accurate affects State’s revised forecasts. While the CAFS includes a general analysis of how various demographic, economic, and political factors impact NIV demand across countries, it does not explain how it arrived at its specific forecasts for Mexico. Based upon our review of the forecasts, it appears that the CAFS authors relied primarily upon historical workload data from the previous Border Crossing Card surge, but we could not assess how, if at all, other considerations were factored into the forecasts. 
Second, methodological issues associated with State's April 2008 NIV forecasts may affect their accuracy in projecting demand. For example, State relied heavily on actual demand data from fiscal year 2007 to revise the CAFS forecasts, in order to try to better account for growth in demand from first-time visa applicants. In doing so, State assumed demand for fiscal year 2007 was representative of the underlying long-term growth in NIV demand. However, this is not necessarily the case, as State officials acknowledge that demand may have been artificially high in fiscal year 2007 as posts worked off backlogs that had accumulated from previous years. State officials also noted that they chose to be conservative and assume that all Border Crossing Card holders would renew their cards when they expire. However, this is not likely to happen, as a portion of Border Crossing Card holders have had their cards lost or stolen and already had them replaced, while others have either legally or illegally immigrated to the United States and will not be returning to renew their cards. Consequently, the forecasts could prove to be higher than actual demand, depending on the share of Border Crossing Card holders who do not seek a renewal at the expiration of their card. State's approach to forecasting NIV workload, which is based on historical precedent, underlying growth in demand, and other factors, provides a reasonable basis for addressing the anticipated surge in NIV demand. State has detailed data on the number of Border Crossing Cards issued during the previous surge and when they are expiring, which gives it a strong basis for its projections. Further, even if the NIV forecasts do not prove completely accurate, State officials do not expect significant risks for several reasons. First, State officials believe that the forecasts are conservative, with NIV demand likely to be lower than forecasted. Second, State intends to avoid relying on the exact numbers in the forecasts and is instead using them as a rough guide in developing plans to meet the upcoming surge in NIV workload. Third, State officials believe they have developed these plans with sufficient flexibility to be able to respond as needed if actual workload deviates from the forecasts. Finally, State plans to continually track demand at the consulates as the NIV surge unfolds and will revise these forecasts periodically. In addition to the surge in NIV workload, Mission Mexico will also experience a surge in its passport workload as a result of the implementation of WHTI at air ports of entry in January 2007 and its subsequent, intended implementation at land ports in June 2009. According to State officials, the mission has already seen a significant increase in its passport workload as U.S. citizens living in Mexico have begun to apply for passports in response to the new documentary requirements. Mission Mexico's passport and Consular Report of Birth Abroad (CRBA) workload, which State tracks together because both types of applications are handled by consular officers in posts' American Citizen Services units, grew to 34,496 applications in fiscal year 2007, a 77 percent increase from fiscal year 2006. Despite the expected increases, passport workload will continue to be only a fraction of Mission Mexico's workload relative to NIV applications. While State expects passport workload in Mexico to continue to increase significantly in the coming years, it is difficult to predict precisely what the magnitude of this increase will be. 
Unlike the NIV surge, the WHTI surge has no clear historical precedent. Additionally, there is a great deal of uncertainty regarding the number of U.S. citizens living in Mexico and the number of these citizens who are potential passport applicants. Therefore, efforts to forecast increases in passport workload due to WHTI are extremely challenging. Nonetheless, State has developed rough estimates of Mission Mexico's passport and CRBA workload with the implementation of WHTI. These estimates are based on the input of experienced consular officers because the lack of data on U.S. citizens living in Mexico made any type of statistical analysis problematic. Based upon State's estimates, Mission Mexico's WHTI workload is projected to peak at 73,000 passport and CRBA applications in fiscal year 2009 with the implementation of WHTI at land ports of entry. State anticipates that passport and CRBA workload will continue at that peak rate in fiscal year 2010 and then begin to decline. In its estimates, State predicts that from fiscal years 2007 to 2009, workload will increase by around 177 percent for Mission Mexico. To this point, State has not revised its WHTI estimates based on actual workload in fiscal year 2007 or in the current fiscal year to date, even though its estimates proved low for fiscal year 2007. State says it has not needed to revise its estimates at this point because posts have been able to keep up with workload increases without the need for additional resources. In addition, rather than focusing on developing precise workload estimates in order to prepare for the surge, State has instead chosen to pursue strategies designed to provide it with the flexibility to respond to increases in workload as they occur—particularly since fewer resources will be needed to cover increases in passport and CRBA applications than in NIV applications, given their small share of Mission Mexico's overall consular workload. To keep pace with the expected NIV renewal surge, State is increasing the total number of hardened interview windows in the consulates' NIV sections by over 50 percent before the demand peaks in 2011. State added windows to the consulate in Hermosillo in fiscal year 2007 and will soon be adding windows to the consulates in Monterrey and Mexico City. In addition, new consulate compounds in Ciudad Juarez and Tijuana will result in additional windows for adjudicating NIV applications. The new facility in Ciudad Juarez is set to open in September 2008, and construction on the new building in Tijuana began this past April. Once completed, these projects will provide Mission Mexico with the window capacity to interview about 1 million additional NIV applicants per year. Table 1 compares the number of interview windows available in fiscal year 2007 to the number that will be available by fiscal year 2011, when NIV demand peaks. Consulate officials at the posts we visited generally expressed confidence that they will have sufficient window capacity to keep pace with the expected NIV demand and avoid wait times for interviews that exceed State's 30-day standard. As shown in figure 3, our analysis of expected window capacity also indicates that Mission Mexico generally appears to have enough window capacity to keep pace with projected demand, based on the April 2008 projections. However, State officials acknowledge that two posts, Nuevo Laredo and Matamoros, will not have adequate window capacity during the NIV surge. 
Consequently, NIV applicants may face longer wait times for an interview appointment at these posts. State officials noted that individuals who would typically apply at one of these two posts will have the option to schedule appointments at the relatively nearby consulate in Monterrey, which is expected to have excess window capacity during the surge in demand. At other posts, the potential shortfall in window capacity, reflected in figure 3, appears to be small enough that it can likely be managed by extending the hours that windows are open, if necessary. Although Guadalajara also appears to have a significant shortfall, consular officials there believe the post should be able to absorb the increased workload with the number of windows available as long as they have enough staff to work the windows in shifts to keep them open all day, if necessary.

In addition to the increase in hardened windows, Mission Mexico requires a significant increase in adjudicators over the next few years. Based on NIV and passport workload projections provided in April 2008, State estimates it will need 217 adjudicators throughout Mission Mexico in fiscal year 2011, which is the expected peak year of the surge in NIV demand. This number is an increase of 96 adjudicators, or about 80 percent, over the number of adjudicator positions in place in fiscal year 2007. State may revise its staffing plans as it generates updated forecasts.

State plans to meet its staffing needs during the expected workload surge primarily by hiring a temporary workforce of consular adjudicators that can be assigned to posts throughout Mission Mexico, depending on each post's workload demands. Figure 4 shows the number of temporary adjudicators and career adjudicators planned for Mission Mexico in fiscal year 2011. State officials noted that relying on a temporary workforce allows Mission Mexico to avoid having excess staff after the workload surge and reduces per-staff costs compared with permanent hires. State has budgeted for about 100 temporary adjudicators to be in place during the surge in workload demand, although State officials noted that these budgeted funds could be reprogrammed if fewer adjudicators than expected are needed. State has already posted the job announcement on its Web site and expects to begin placing these additional temporary adjudicators at posts in fiscal year 2009. State officials noted that they will try to fill slots gradually to help posts absorb the additional staff. The temporary hires will be commissioned as consular officers with 1-year, noncareer appointments that can be renewed annually for up to 5 years. They will also receive the same 6-week Basic Consular Course at the Foreign Service Institute in Arlington, Virginia, as permanent Foreign Service officers. These individuals must be U.S. citizens, obtain a security clearance, and be functionally fluent in Spanish. Housing in Mexico for the temporary adjudicators will be arranged by the State Bureau of Consular Affairs in Washington, D.C., through contract services, which will provide greater flexibility to move adjudicators from one post to another, if necessary. As figure 4 indicates, posts in Monterrey, Mexico City, Ciudad Juarez, and Tijuana are expected to be the heaviest users of temporary adjudicators. Consequently, these posts would be at greatest risk of increased NIV backlogs if temporary adjudicator slots cannot be filled as needed or if their productivity is not as high as anticipated.
However, State officials believe they have an adequate pool of potential candidates from among returning Peace Corps volunteers, graduates of the National Security Education Program, eligible family members, and retired Foreign Service officers. These officials noted that they recently began reaching out to targeted groups of potential applicants and have already received strong interest. Furthermore, officials from the posts we visited were confident that State's plan to provide them with additional consular officers would enable them to keep pace with workload demand. Post officials anticipate the same level of productivity and supervision requirements as they would expect from new career Foreign Service officers. The officials noted that new consular adjudicators typically take about 2 months of working the NIV interview windows to reach the productivity levels of more experienced adjudicators.

State began a pilot program in the spring of 2008 at two posts, Monterrey and Nuevo Laredo, to outsource part of the NIV application process, including biometric data collection, to an off-site facility. The pilot is part of an effort by State to establish a new service delivery model for processing visas worldwide in response to long-term growth in demand for visas. State envisions expanding this model throughout Mexico and other high-demand posts worldwide through a formal request for proposal process. State also envisions the possibility of providing off-site data collection facilities serving NIV applicants in cities that do not have consulates. In Monterrey, the pilot made space available in the consulate facility to add much-needed NIV interview windows. The pilot is implemented by a contractor that handles functions that do not require the direct involvement of a consular officer, including scanning applicants' fingerprints and passports, taking live-capture digital photographs, and visa passback. Consular officers at these two posts focus on their "core mission" of making adjudication decisions after the contractor has electronically transferred the applicants' application and biometric data. The cost of outsourcing these functions is covered through an additional fee of $26 paid by the applicants.

Consulate officials at the posts involved in the pilot are responsible for monitoring the performance of the contractor through the use of surveillance cameras, random visits to the off-site facility, and validation reviews of NIV applications to check for instances of fraud or incorrect information. According to State officials, the contractor does not have the ability to alter any of the data it collects, and a U.S. citizen with a security clearance is on site to manage the facility. Consular officials in Monterrey stressed the importance of monitoring contractor employees to help ensure they do not coach applicants. State officials stated that the department intends to assess the pilot to ensure that the technological challenges of remote biometric data collection and data transfer have been overcome. They will also assess whether the new software involved presents the data to consular officers in a user-friendly format to facilitate adjudication. In addition, State will monitor adjudication rates at the participating posts. State has neither established specific milestones for completing the pilot nor provided us with any metrics that would be part of an assessment of the potential impact on productivity, fraud, or security.
In another step to help posts keep pace with NIV demand, Mission Mexico has also begun to waive interviews of NIV renewal applicants, as allowed under certain circumstances established by federal law and State regulations. State recently provided guidance to posts worldwide on waiving interviews for certain applicants, following the transition to the collection of 10 fingerprints and technology allowing reuse of fingerprints. The policy only applies to applicants seeking to renew their biometric NIVs within 12 months of expiration. Consular officers retain the discretion to require any applicant to appear for an interview, and no applicant may have an interview waived unless the applicant clears all computer-based security screening. According to State guidance, consular officers will also have the discretion to waive interviews of applicants as part of the off-site data collection model being piloted in Monterrey and Nuevo Laredo, when prints collected off site match the applicant's fingerprints already in the system. According to State officials, this will be possible beginning in 2009, when Border Crossing Cards issued after 1999 containing biometric data start to expire. The Monterrey and Ciudad Juarez posts have already begun to waive interviews of applicants renewing NIVs and have found significant productivity gains. As a result, officers there were able to adjudicate cases more rapidly and better utilize window capacity, according to consular officials. These posts also found no significant difference in denial rates for NIV renewal applicants who were interviewed compared to those whose interviews were waived, although post and Bureau of Consular Affairs officials noted it was necessary to continue monitoring the effect of waiving interviews. These officials also highlighted the need to adjust consular training to be consistent with State's current guidance on waiving interviews under certain circumstances.

Posts in Mexico will also be increasing resources for adjudicating additional passport applications, which are expected to peak in fiscal year 2009. Although the volume of passport applications is much smaller than that of NIV applications, adjudicating passport applications for American citizens takes precedence over NIV applications. Consular officials at posts we visited noted that, because of the uncertainty over future passport demand, they will depend on their flexibility to shift adjudicators from NIV work to passport work, as needed. In addition, consular officials stated they will have the option of using NIV interview windows to adjudicate passport applications—possibly during off hours, if necessary. In addition, posts are seeking ways to become more efficient in how they process the increasing volume of passports. For example, many posts have recently implemented an appointment system to better manage the flow of passport applicants and have also improved their Web sites to help provide better assistance to applicants, many of whom do not speak English and are applying for passports for the first time. State is also upgrading its software used for passport processing in overseas posts to enable posts to scan passport applications, which it expects will reduce staff resources needed for data entry. Some posts are also considering increased use of consular agents in other locations, such as Puerto Vallarta or Cabo San Lucas, to accept passport applications to help relieve some of the workload for consular staff.
In addition, some posts have suggested exploring possibilities for processing passport renewals by mail, which would also help relieve overcrowding. In anticipation of the expected surge in demand for NIVs and U.S. passports in Mexico over the next several years, State has taken several steps to project workloads and expand the capacity of its consulates to avoid the type of backlogs that have occurred in Mission Mexico in the past. State’s efforts to increase the number of hardened interview windows at several of its consulates and hire additional temporary consular officers represent a substantial increase in resources needed to keep pace with the projected surge in NIV and passport workload. As State continues to revise its estimates of future workload, it may need to adjust its plans for increasing these resources to reflect the latest assumptions about future demand for passports and NIVs. The success of the efforts to prepare for the surges in passport and NIV workload is likely to depend on State’s ability to fill the roughly 100 slots it has budgeted for temporary adjudicators in time to meet the surge in workload. Several posts in Mexico will rely heavily on these additional staff to keep pace with expected demand for NIVs and avoid excessive wait times for interviews of applicants. However, State officials have expressed confidence that they will be able to fill these positions with qualified candidates. In addition, Mission Mexico may reap productivity gains from a pilot program to outsource part of the NIV application process at off-site facilities and from State’s policy to waive interviews for some renewal applicants; however, these efforts are in their early stages and are not yet widely implemented. Consequently, it would be premature to assess the potential effects of these efforts. We discussed this testimony with State officials, who agreed with our findings. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or other Members of the Subcommittee may have at this time. For further information regarding this testimony, please contact Jess T. Ford at (202) 512-4128 or fordj@gao.gov. Juan Gobel, Assistant Director; Ashley Alley; Joe Carney; Howard Cott; David Dornisch; Michael Hoffman; and Ryan Vaughan made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Mission in Mexico is the Department of State's largest consular operation. In fiscal year 2007, it processed 1.5 million of the 8 million nonimmigrant visas (NIV) State handled worldwide. The U.S. Mission in Mexico also provided services, including passport processing and emergency assistance, to 20,000 American citizens in fiscal year 2007. This already significant consular workload is expected to increase dramatically in the coming years as millions of NIV Border Crossing Cards issued in Mexico between fiscal years 1998 and 2002 expire and need to be renewed. In addition, the implementation of new travel requirements under the Western Hemisphere Travel Initiative (WHTI) will, for the first time, require U.S. citizens to carry passports, or other approved documentation, when traveling between the United States and Mexico. This testimony addresses (1) State's estimates of the workload for consulates in Mexico through 2012 resulting from, in particular, new travel requirements and the reissue of Border Crossing Cards; and (2) the actions State has taken to ensure consulates in Mexico keep pace with projected workload increases through 2012. This testimony is based on work currently in process that involves analyzing State's workload forecasts and forecast methodology, interviewing State officials, and visiting five posts in Mexico. GAO discussed this testimony with State officials, who agreed with GAO's findings. According to State forecasts, as of April 2008, the U.S. Mission in Mexico's (Mission Mexico) NIV demand will peak at slightly over 3 million applications in fiscal year 2011, about twice the number from fiscal year 2007. State acknowledges there are uncertainties regarding the number of Border Crossing Card holders who will renew their cards and the number of first time NIV applicants, which may affect the accuracy of its forecasts. State will be revising the forecasts on a periodic basis as new data become available. In addition to its increase in NIV workload, Mission Mexico will also be facing increases in its passport workload due to the implementation of WHTI. The exact magnitude of the increase in passport workload is more difficult to forecast than for NIVs, because there is not the same historical precedent. There is also a great deal of uncertainty as to how many U.S. citizens actually live in Mexico or the number of these citizens likely to apply for a passport. In anticipation of this surge in demand for NIVs and U.S. passports, State is taking steps to ensure consulates in Mexico keep pace, including adding consular interview windows to several high-demand posts and planning to hire about 100 temporary adjudicating officers. Consular officials GAO met with at several posts in Mexico generally agreed that these efforts to expand resources should be adequate for Mission Mexico to keep pace with expected workload increases, and GAO's analysis indicates the mission will generally have enough interviewing windows during the surge. Several posts will rely on the addition of temporary adjudicators to keep pace with increased NIV demand and would face backlogs if these slots cannot be filled or if the temporary staff are not as productive as expected. However, State is confident that it has an adequate pool of potential applicants. 
Mission Mexico may also gain additional capacity from a pilot program, currently under way at two posts, that outsources a portion of the NIV application process to off-site facilities; however, the pilot was implemented too recently to assess its potential impact on productivity, fraud, or security.
California is the nation's most populous state and the eighth-largest economy in the world. California is estimated to receive approximately $85 billion in Recovery Act funds, or about 10 percent of the funds available nationally. Nearly 80 percent of Recovery Act funding to states and localities is projected to be distributed within the first 3 years. Peak projected outlays are in fiscal year 2010, with outlays that year projected to be more than twice the level of fiscal year 2009 outlays. The California Recovery Task Force (Task Force), which was established by the Governor in March 2009, has overarching responsibility for ensuring that the state's Recovery Act funds are spent efficiently and effectively and are tracked and reported in a transparent manner. The Task Force reports on the use and status of Recovery Act funds using the state's recovery Web site (www.recovery.ca.gov). In addition to the Task Force's efforts, other California entities with oversight responsibilities, including the State Auditor, have expanded the scope of their work to include a focus on state programs receiving Recovery Act funds.

As of December 9, 2009, the Task Force estimated that approximately $53 billion had been allocated to California state agencies and local governments, nonprofits, local education agencies, and private companies through spending programs. The remaining portion, approximately $30 billion, is being provided to individuals and businesses in the form of direct tax relief. Approximately $33.7 billion had been awarded and $17.8 billion had been expended. As shown in figure 1, health, education, and labor accounted for almost 96 percent of California's Recovery Act expenditures. The largest programs within these areas were the state Medicaid program and SFSF.

To help measure the impact of the Recovery Act, the act contains numerous provisions that require recipients of Recovery Act funding to report quarterly on several measures. Nonfederal recipients of Recovery Act funds, such as state and local governments, private companies, educational institutions, and nonprofits, are required to submit reports with information on each project or activity, including amounts and a description of the use of funds and an estimate of the jobs created or retained. To collect this information, the U.S. Office of Management and Budget (OMB) and the Recovery Accountability and Transparency Board created a nationwide data collection system to obtain data from recipients, www.federalreporting.gov (FederalReporting.gov), and another site for the public to view and download recipient reports, Recovery.gov. Shortly before recipients could begin entering data into FederalReporting.gov for the second quarterly reporting period, OMB issued a memorandum for the heads of U.S. executive departments and agencies on December 18, 2009, updating its reporting guidance on the Recovery Act in response to suggestions made by recipients and agencies, as well as our recommendations. The updated guidance focuses on issues related to data quality, nonreporting recipients, and reporting of job estimates, among other important reporting requirements. We previously reported that the Task Force, with the assistance of the state's Chief Information Officer (CIO), created and deployed a central information technology system for state departments to report quarterly recipient report data.
For the first two rounds of recipient reporting, California established a centralized reporting system, the California ARRA Accountability Tool (CAAT), which state agencies receiving Recovery Act funds used to report their data to the Task Force. California's CIO, on behalf of the Task Force, was responsible for collecting the data from state agencies and uploading the data to FederalReporting.gov.

California used Recovery Act funds to help balance the state fiscal year 2009-2010 budget, when the state faced a nearly $60 billion budget gap, and future budget shortfalls are expected. As discussed in our prior reports, California balanced its state fiscal year 2009-2010 budget by, among other things, making more than $31 billion in cuts, increasing taxes by $12.5 billion, and using over $8 billion in Recovery Act funds. However, California's long-term fiscal prospects remain of concern. For example, in November 2009, the Legislative Analyst's Office (LAO) estimated the size of the 2009-2010 and 2010-2011 budget shortfall at about $21 billion. According to the LAO, the main reasons for the budget gaps are the state's inability to achieve previous budget solutions in several areas; the effects of several adverse court rulings; and, for 2010-2011, the expiration of various one-time and temporary budget solutions approved in 2009. The Governor's 2010-2011 budget proposal was somewhat more optimistic and identified an $18.9 billion budget shortfall. Nonetheless, the budget gap constitutes roughly one-quarter of the state's annual budget expenditures. The Governor declared a fiscal emergency on January 8, 2010, calling the legislature into special session to act on his proposed solutions to address the budget shortfall. Those proposed solutions include reductions in state programs, shifts of state funds to pay for general fund expenses, and requests for additional federal funds and greater flexibility. On January 22, 2010, the state Controller urged the state legislature and Governor to address the state's projected budget and cash shortfalls for the remainder of the current fiscal year, as well as the next fiscal year, in order to protect California's economic recovery, continue the financing of public works projects, and prevent even greater financial hardship. Further, the Controller stated that, if the budget situation is not resolved, the legislature and Governor will again face the prospect of a cash crisis beginning in July 2010.

Local city and county governments in California are also struggling with declining revenues and budget problems. Additionally, local governments are affected by the fiscal situation of the state because a number of revenue sources—such as sales tax, gas tax, vehicle license fees, and many others—pass through the state. For example, to balance California's fiscal year 2009-2010 budget, state leaders agreed to borrow almost $2 billion in local property tax revenue and make $877 million in local government transportation revenue available to the state general fund for transit debt service. Officials we met with in the City of Los Angeles (Los Angeles) and the County of Sacramento said that they face budget shortfalls this fiscal year due to declines in state funding for programs, tax revenues, and fees. (Fig. 2 highlights information about the two local governments we reviewed.)
For example, a Los Angeles official told us that, for the remainder of fiscal year 2010, the city is trying to close a deficit of $212 million and has a projected $485 million deficit for fiscal year 2011. Sacramento County officials reported that the county is facing a nearly $14 million general fund budget shortfall for the remainder of fiscal year 2009-2010 and faces cuts of around $149 million for next fiscal year. According to government officials in both localities, Recovery Act funds are helping to preserve the delivery of essential services and repair infrastructure but have generally not helped stabilize their base budgets. Overall, as of February 18, 2010, a Los Angeles official reported that the city had been awarded about $597 million in Recovery Act grants, and Sacramento County officials reported the county had been awarded about $88 million in Recovery Act formula grants as of January 15. Most Recovery Act funds to local governments flow through existing federal grant programs. Some of these funds are provided directly to the local government by federal agencies, and others are passed from the federal agencies through state governments to local agencies. As shown in table 1, local officials reported their governments' use of Recovery Act funds in program areas including public safety (Edward Byrne Memorial Justice Assistance Grant (JAG)) and the Energy Efficiency and Conservation Block Grant (EECBG). Other Recovery Act funds received by these localities included formula grants for prevention of Internet crimes against children, public housing, emergency shelter, health centers, capital improvements, airport security and improvement, transportation, and additional competitive grant awards. Officials reported that Los Angeles has applied for about $893 million in additional Recovery Act grants, and the County of Sacramento has applied for an additional $330 million in competitive grants.

In March 2009, California was apportioned $2.570 billion in Recovery Act funds for the restoration, repair, and construction of highways and other activities allowed under the Federal-Aid Highway Surface Transportation Program. As of February 16, 2010, the U.S. Department of Transportation (DOT) Federal Highway Administration (FHWA) had obligated $2.525 billion (98 percent) of California's apportionment. Highway funds are apportioned to states through federal-aid highway program mechanisms, and states must follow existing program requirements, which include ensuring each project meets all environmental requirements associated with the National Environmental Policy Act (NEPA), complying with goals to ensure disadvantaged businesses are not discriminated against in the awarding of construction contracts, and using American-made iron and steel in accordance with Buy American requirements. The Recovery Act also required that 30 percent of these funds be suballocated, primarily based on population, for metropolitan, regional, and local use. In California, according to state sources, a state law enacted in late March 2009 increased the suballocation so that more—62.5 percent of the $2.570 billion ($1.606 billion)—would be allocated to local governments for projects of their selection. The majority of Recovery Act highway obligations for California have been for pavement improvements—including resurfacing, rehabilitating, and constructing roadways.
Of the funds obligated, approximately 65 percent ($1.643 billion) is being used for pavement widening and improvement projects, while 32 percent ($815 million) is being used for safety and transportation enhancements and 3 percent ($68 million) for bridge replacement and improvement projects. Figure 3 shows obligations in California by the types of road and bridge improvements being made. According to information reported on Recovery.gov, as of December 31, 2009, California funded 761 highway infrastructure projects with Recovery Act funds. Fourteen percent, or 103 of these projects, were completed; 34 percent (268 projects) were under way; and about 51 percent (390 projects) had not yet started. Projects under way, which were in various stages of completion, accounted for over $1 billion in obligations, and projects that had been obligated funds but had not yet started had an estimated value of almost $953 million. (See fig. 4 for an example of a Recovery Act-funded pavement project.)

Under both the Recovery Act and the regular Federal-Aid Highway Surface Transportation Program, California has considerable latitude in selecting projects to meet its transportation goals and needs. California Department of Transportation (Caltrans) officials reported using the state portion to fund state highway rehabilitation and maintenance projects that would not have otherwise been funded due to significant funding limitations. In addition to maintenance projects, the state has allocated Recovery Act funds to large construction projects, including one of the largest transportation investments, approximately $197.5 million for the construction of the Caldecott Tunnel, a new two-lane bore tunnel connecting Contra Costa and Alameda counties. In addition, as previously mentioned, according to state officials, a March 2009 state law provided more funding directly to local governments, allowing a number of locally important projects to be funded. For example, $319 million in Recovery Act funds were obligated for 195 local projects in the Los Angeles area that may not have otherwise been funded in 2009, such as the Compton Boulevard resurfacing project. This project received approximately $750,000 in Recovery Act funds and would not have been funded for many years without them.

As of February 16, 2010, $273 million of the $2.525 billion obligated to California highway projects had been reimbursed by FHWA. Although federal reimbursements in California have increased over time, from $22 million in September 2009 to $273 million, this rate, 11 percent, continues to be lower than the nationwide reimbursement rate of 25 percent ($6.3 billion of the $25.1 billion obligated). As we reported in December 2009, Caltrans officials attributed the lower reimbursement rate to having a majority of its projects administered by local governments, which may take longer to reach the reimbursement phase than state projects due to additional steps required to approve local highway projects. For example, highway construction contracts administered by local agencies generally call for a local review and a local public notice period, which can add nearly 6 weeks to the process. Additionally, Caltrans officials stated that localities with relatively small projects tend to seek reimbursement in one lump sum at the end of a project to minimize time and administrative cost.
Caltrans has started to monitor pending invoices submitted by local agencies for Recovery Act projects to better assess how quickly Recovery Act funds are being spent. The Recovery Act required states to ensure that all apportioned Recovery Act funds were obligated within 1 year after apportionment and, according to Caltrans officials, as of February 18, 2010, 100 percent of California's highway infrastructure Recovery Act apportionment had been obligated. If any states did not meet this requirement by March 2, 2010, the Secretary of Transportation would withdraw and redistribute the unobligated funding to other eligible states. Any Recovery Act funds that are withdrawn and redistributed are available for obligation until September 30, 2010. In addition to meeting the 1-year obligation deadline under the Recovery Act, Caltrans has also been working to meet two other Recovery Act requirements that do not exist in the regular Federal-Aid Highway Surface Transportation Program: (1) identification of economically distressed areas and (2) maintenance of effort.

Identifying economically distressed areas. As we reported in December 2009, Caltrans revised its economically distressed areas determination using new guidance issued to states in August 2009 by FHWA, in consultation with the Department of Commerce, giving more direction on "special needs" criteria for areas that do not meet the statutory criteria in the Public Works and Economic Development Act. As a result, the number of counties considered distressed increased from 49 to all 58 counties. According to Caltrans officials, this new determination did not change how Caltrans funded or administered Recovery Act projects. The Recovery Act requires states to give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. Caltrans officials told us that, in selecting projects for funding, they first considered how quickly the project could be started and its potential to create and retain jobs, then considered the extent of need within each economically distressed area. Recently, FHWA reviewed the documentation that California used in its application of special needs criteria and determined that the data used were not consistent with FHWA guidance. Caltrans has been advised that the data must show a connection between demonstrated severe job losses and actual, identified firm closures and restructuring. On February 24, 2010, Caltrans officials reported that Caltrans was working to address FHWA's data concerns by evaluating methods to assess the job losses without the use of confidential data.

Maintaining effort. While California is still reviewing its current maintenance-of-effort certification, it does not anticipate difficulty in maintaining the level of spending for transportation projects funded by the Recovery Act that it had planned to spend as of February 17, 2009—the day the Recovery Act was enacted. California, like many other states, had to revise its initial March 5, 2009, certification because the certification included a conditional statement, which was not permitted by the Recovery Act. On February 9, 2010, DOT requested that each state review its current certification and take any corrective action with regard to the state's calculation of the maintenance-of-effort amount on or before March 11, 2010.
Although California is reviewing its certification, Caltrans officials maintain that California expects to meet the planned level of spending, in part because the state reinstated a transportation bond program worth approximately $20 billion.

The Recovery Act appropriated $5 billion for the Weatherization Assistance Program, which the Department of Energy (DOE) is distributing to each of the states, the District, and seven territories and Indian tribes, to be spent over a 3-year period. This program helps low-income families reduce their utility bills by making long-term energy efficiency improvements to their homes, for example, by installing insulation or modernizing heating or air conditioning equipment. DOE has limited states' access to 50 percent of these funds and plans to provide access to the remaining funds once a state meets certain performance milestones, including weatherizing 30 percent of all the homes in its state plan that it estimates it will weatherize with Recovery Act funds. In addition, the Recovery Act requires all laborers employed by contractors and subcontractors on Recovery Act projects to be paid at least the prevailing wage, as determined under the Davis-Bacon Act. The Department of Labor (Labor) first established prevailing wage rates for weatherization in all of the 50 states and the District by September 3, 2009.

DOE allocated approximately $186 million in Recovery Act funds for weatherization in California. This represents a large increase in funding over California's annually appropriated weatherization program, which received about $14 million for fiscal year 2009. By June 2009, DOE had provided 50 percent—about $93 million—of the Recovery Act funds to the California Department of Community Services and Development (CSD), the state agency responsible for administering the state's weatherization program. In late July, the state legislature approved CSD's use of these funds. Of the funds received, CSD retained about $16 million to support oversight, training, and other state activities. CSD has begun distributing the remaining $77 million throughout its existing network of local weatherization service providers, including nonprofit organizations and local governments. According to CSD, as of January 25, 2010, CSD had awarded about $66 million of the $77 million to 35 local service providers throughout the state for planning, purchasing equipment, hiring and training, and weatherizing homes. This amount includes $14.3 million to two service providers for three of the four service areas in the County of Los Angeles. It also includes almost $3 million and $3.8 million, respectively, to the service providers for Orange and Riverside counties. CSD has not yet awarded the remaining funds—approximately $10 million—to service providers for the remaining part of the County of Los Angeles, parts of Alameda County, Alpine County, El Dorado County, Santa Clara County, San Francisco County, and Siskiyou County. For these areas, CSD has been either seeking a new service provider or withholding funds pending the completion of an investigation of the designated service provider. CSD reported that, as of December 31, 2009, CSD and its service providers had spent approximately $10 million—or about 5 percent—of the Recovery Act funds on weatherization-related activities.
Also, according to CSD, 849 homes had been weatherized as of February 26, 2010, which is less than 2 percent of the approximately 43,000 homes that CSD currently estimates will be weatherized with Recovery Act funds. In particular, 7 homes have been weatherized in the County of Los Angeles, none in Orange County, and 20 in Riverside County.

Weatherization in California has been delayed, in part, because (1) CSD decided to wait until Labor determined the state's prevailing wage rates, which occurred on September 3, 2009, and (2) after the prevailing wage rates were determined, local service providers raised concerns about an amendment CSD is requiring them to adopt to their Recovery Act weatherization contracts to ensure compliance with the act. CSD officials explained that, in anticipation of additional staffing and administration challenges for service providers, they wanted more clearly defined Davis-Bacon Act requirements, including the actual wage rates, before spending Recovery Act funds. CSD estimates that waiting for the wage rate determinations delayed weatherization in California for 2 to 3 months. CSD reported to us that, although the rate determinations for two of the three weatherization-related job categories are mostly similar to what service providers currently pay, the rates for the third category—heating, ventilating, and air conditioning work—are much higher and will, thus, lead to cost increases. CSD also reported that it expects that the Davis-Bacon Act administrative requirements—including expanding existing administrative and accounting systems, updating payroll documentation and reporting, and increasing subcontractor monitoring—will have a substantial impact on program costs. For example, CSD must seek a replacement service provider for three of the previously discussed designated service areas because the existing three providers for these areas chose not to participate in the Recovery Act-funded weatherization activities due, in part, to concerns that the funding did not adequately support these increased administrative requirements. CSD also reported that its service providers have had difficulty identifying subcontractors willing to comply with the Davis-Bacon Act requirements.

According to state officials, CSD is requiring service providers to adopt an amendment to their Recovery Act weatherization contracts to ensure that they comply with the Recovery Act, including certifying that they comply with the Davis-Bacon provisions, before providing Recovery Act funds to them to weatherize homes. Only two providers adopted the amendment by the initial October 30 deadline. According to CSD, many providers did not adopt the amendment because they objected to some of its provisions, including those pertaining to compensation, cost controls, and performance requirements. As a result, CSD entered into negotiations with providers and formally issued a modified amendment on December 17, 2009. However, prior to December 17, CSD announced steps that providers could take to accept the modified amendment in advance of its formal issuance and, thus, begin weatherizing homes sooner. Twenty-six service providers accepted the modified amendment in advance of the formal issuance and, to date, all active service providers have adopted the amendment. According to state officials, the amendment requires service providers to submit a wage plan for meeting the Davis-Bacon Act requirements before receiving any funds to weatherize homes.
As of February 24, 2010, 26 service providers have submitted wage plans, all of which CSD has approved. Finally, CSD plans to issue an additional contract amendment by the end of March 2010 to, among other things, incorporate the new prevailing wage rates issued by Labor in December 2009. A CSD official told us that the department does not anticipate any delays in implementing this amendment.

In a February 2, 2010, audit of CSD, the State Auditor reported that delays in weatherizing homes could jeopardize CSD's ability to meet DOE's performance milestones and, thus, its ability to timely access the remaining $93 million in Recovery Act weatherization funds. Thirty percent of all homes estimated to be weatherized in the state plans approved by DOE must be completed before the remaining funds may be accessed. The State Auditor also found that CSD needs to improve its control over cash management and that it lacks written procedures for preparing program reports. In its response to the report, CSD stated that it plans to meet DOE's performance milestones by redirecting funds from areas without service providers to providers with the capacity to weatherize more homes. CSD also outlined steps it is taking to provide weatherization services to the previously discussed unserviced areas where it is either seeking a new service provider or withholding funds.

Our prior reports have also highlighted delays in this program, and we plan to continue to follow California's progress in using Recovery Act weatherization funds, including the following:

Number of homes weatherized. Although CSD has developed quarterly targets for weatherizing enough homes to meet DOE's performance milestones, it is too early to assess whether service providers are meeting these targets. However, as of February 26, 2010, CSD reported that the state had weatherized only 849 of the 3,912 homes targeted for the first quarter of the 2010 calendar year.

Service areas without weatherization providers. According to CSD, 6 out of 43 designated service areas do not yet have service providers that are ready to begin weatherizing homes with Recovery Act funds. According to CSD's latest estimates, these service areas account for 3,624—or over 8 percent—of the approximately 43,000 homes that it currently plans to weatherize with Recovery Act funds.

Additional contract amendment forthcoming. In light of service providers' resistance to CSD's first contract amendment process, CSD cannot be certain that its upcoming attempt to revise contracts will not be met with some level of resistance from providers and, therefore, lead to additional delays in weatherizing homes.

In response to the State Auditor's findings, the Task Force stated that it is working with CSD to improve internal controls and streamline contract approvals and that the Task Force is committed to ensuring that California "does not leave one dollar of Recovery Act funding on the table."

As of February 19, 2010, California had disbursed approximately $4.7 billion in Recovery Act education funds for three programs—SFSF; ESEA Title I, Part A, as amended; and IDEA, Part B. These funds were allocated to local educational agencies (LEA), special education local plan areas, and institutions of higher education (IHE). Specifically, California was allocated $5.47 billion in SFSF funds to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other government services.
Under the Recovery Act, states must allocate 81.8 percent of their SFSF to support education (education stabilization funds), and the remaining 18.2 percent must be used for public safety and other government services, which may include education programs. California has received about $1.1 billion in SFSF government services funds that it used for payroll costs for its corrections system and has received about $4 billion in SFSF education stabilization funds. California also received approximately $464 million in Recovery Act ESEA Title I, Part A funding, which supports education for disadvantaged students, and about $286 million in IDEA funding, which supports special education efforts.

The majority of LEAs in California said they anticipate using more than half of their Recovery Act funds to retain jobs. As of December 31, 2009, the California Department of Education (CDE) reported that LEAs in the state funded a total of nearly 50,000 education jobs—mostly teachers—with the three Recovery Act education funding programs in our review, with approximately 39,000 of those jobs funded by SFSF. In the Los Angeles Unified School District (LA Unified), according to district officials, almost 6,400 jobs were funded by the three Recovery Act programs. LA Unified officials said that, without the Recovery Act funds, teacher layoffs could have caused increased class size, with a resulting loss of individual attention to each student. Yet, even with SFSF funds, an estimated 50 percent of the California LEAs reported that they expect job losses. Recently, officials from two large California LEAs told us that their districts anticipate teacher and other staff layoffs for the next school year to address budget shortfalls. According to a senior LA Unified official, the district may face teacher and support staff cuts of 7,000 to 8,000 to balance its budget for the 2010-2011 school year.

While LEAs are using a large portion of their Recovery Act funds for jobs, LEAs we met with told us they also planned to use funds for other eligible activities, such as purchasing textbooks and funding deferred facility maintenance, among other program uses. We visited two LEAs in California—the Los Angeles Unified School District and Alvina Elementary Charter School in Fresno County—to find out more about how they are spending Recovery Act funds (see table 2 for a description of these uses).

LEAs also awarded contracts for services and materials using Recovery Act funds. Although including provisions related to the Recovery Act is not a requirement under the act, LEA officials we met with stated that including Recovery Act provisions in contracts could have been useful in helping vendors understand Recovery Act requirements, including reporting requirements. However, none of the contracts we reviewed included provisions related to Recovery Act requirements. We met with seven LEAs that awarded contracts using either SFSF or ESEA Title I Recovery Act funds, or both, for services such as tutoring, professional development for teachers, and special programs for students, as well as for equipment. According to LEA officials and our review of contracts, contract terms did not include specific Recovery Act requirements, such as wage rate requirements, whistleblower protection, and reporting requirements. LEA officials stated that they neither received guidance from CDE regarding the administration of Recovery Act contracts nor were aware of Recovery Act-specific contract terms and conditions.
Two of the LEAs we met with told us that they plan to include Recovery Act terms and conditions in future contracts. Our prior reports highlighted concerns related to CDE’s and LEAs’ ESEA Title I, Part A, cash management practices—specifically CDE’s early drawdown of ESEA Title I Recovery Act funding and the release of $450 million (80 percent) of the funds to LEAs on May 28, 2009. According to CDE officials, the drawdown was in lieu of its normally scheduled drawdown of school year 2008-2009 ESEA Title I funds and, therefore, the schools would be ready to use the funds quickly. However, in August 2009, we contacted the 10 LEAs in California that had received the largest amounts of ESEA Title I, Part A Recovery Act funds and found that 7 had not spent any of these funds and that all 10 reported large cash balances— ranging from $4.5 million to about $140.5 million. This raised issues about the state’s compliance with applicable cash management requirements. In response to cash management concerns, CDE implemented a pilot program to help monitor LEA compliance with federal cash management requirements. The program uses a Web-based quarterly reporting process to track LEA cash balances. Currently, the pilot program collects cash balance information from LEAs that receive funds under one relatively small non-Recovery Act program. CDE officials told us that they plan to expand the pilot to include regular and Recovery Act ESEA Title I, Part A, and SFSF by October 2010. CDE has collected data from LEAs for two quarters and has conducted an analysis to compare drawdown amounts from prior fiscal years. However, CDE has not yet established performance goals for the pilot program or developed a program evaluation plan. We also raised concerns about the inconsistent interest calculation and payment remittance processes at LEAs in California. CDE has since developed an interest calculation methodology and, on January 25, 2010, provided guidance to all LEAs on calculating and remitting interest on federal cash balances. CDE officials also told us that they plan to monitor LEA remittance of interest from Recovery Act funded programs by reviewing expenditure data LEAs submit in their quarterly recipient reports and verifying that the LEA remitted appropriate interest amounts. However, CDE has not yet developed mechanisms to help ensure LEAs are using sound interest calculation methods and promptly remitting interest earned on federal cash advances for non-Recovery Act funded programs. We plan to continue following this cash management issue in our ongoing bimonthly work. Since the Recovery Act was enacted in February 2009, California oversight entities and state agencies have taken various actions to oversee the use of Recovery Act funds. State oversight entities, for example, have conducted risk assessments of internal control systems and provided guidance to recipients of Recovery Act funds. In our previous reports on Recovery Act implementation, we discussed the oversight roles and activities of key entities in California for Recovery Act funds. In addition to these entities, state agencies are responsible for, and involved in, oversight and audits of Recovery Act programs. Although certain federal agencies and Inspectors General also have various oversight roles, our review has focused on the state efforts. 
As mentioned in our previous reports, the Task Force was established by the Governor to track Recovery Act funds that come into the state and ensure that those funds are spent efficiently and effectively. The Task Force is relying on California's existing internal control framework to oversee Recovery Act funds, supplemented by additional oversight mechanisms. Several agencies and offices play key roles in overseeing state operations and helping ensure compliance with state law and policy. The key oversight entities are the Task Force, the state's Recovery Act Inspector General, and the State Auditor. Their key oversight roles are summarized in table 3.

As California gained more experience in implementing the Recovery Act during the past year, state oversight entities have taken actions to evaluate and update controls and guidance related to Recovery Act funds. For example, the Task Force prepared and issued 30 Recovery Act Bulletins to provide instructions and guidelines to state agencies receiving Recovery Act funds on topics ranging from recipient reporting requirements related to jobs to appropriate cash management practices. Additionally, the California Recovery Act Inspector General coordinated seven fraud prevention and detection training events throughout the state for state and local agencies and the service provider community, with presentations from federal agencies on measures to avoid problems and prevent fraud, waste, and abuse. Over 1,000 state and local agency staff attended the training events, which were also available through a Webinar. As of December 2009, the California State Auditor's office had published five letters or reports on the results of early testing and/or preparedness reviews conducted on 25 Recovery Act programs at nine state departments that are administering multiple Recovery Act programs. These audit reports resulted in numerous recommendations to state agencies aimed at improving oversight of Recovery Act funds. California agency officials and internal auditors from state departments that manage transportation, education, and weatherization programs are engaged to various degrees in the oversight and auditing of Recovery Act funds. Table 4 provides an overview of selected oversight and auditing activities of these agencies.

As reported on Recovery.gov, as of February 23, 2010, California recipients reported funding 70,745 jobs with Recovery Act funds during the second quarterly reporting period ending on December 31, 2009. This was the largest number of jobs reported by any state for this quarter. The Recovery Act provided funding through a wide range of federal programs and agencies. Over 30 California state agencies have received or are expected to receive Recovery Act funds and were required to report job estimates. Figure 5 shows the number and share of jobs funded by state agencies receiving Recovery Act funds, as reported on Recovery.gov. Education programs accounted for approximately 71 percent, or about 50,000 jobs: 38,924 under SFSF and 11,048 under other programs administered by CDE. Task Force officials reported that new reporting guidance issued by OMB—approximately 2 weeks before recipients were to begin reporting—was implemented by most state agencies, but the notable exception was CDE, which continued to follow the old guidance.
On December 18, 2009, OMB updated its reporting guidance, and the Task Force advised California recipients that there were some notable changes, specifically as follows:

Recipients do not have to determine if a particular employee or job classification would have been laid off without the receipt of Recovery Act funds (i.e., retained), as they did before. If a position is being funded by the Recovery Act, the hours should be included in the number of jobs created.

Recipients are no longer required to sum hours across reporting quarters or provide cumulative totals. Instead, they report jobs on a quarterly basis, providing a quarterly snapshot.

Recipients will find the federal reporting system open in February to correct data reported during January.

The new OMB guidance still required recipients to report jobs as FTEs, but it further defined FTEs as the total number of hours worked and funded by Recovery Act dollars within the reporting quarter, divided by the number of hours in a full-time schedule for that quarter, and provided guidance on applying the new formula. According to Task Force officials, CDE did not instruct LEAs to recalculate job estimates using the new OMB guidance. CDE plans to have LEAs revise job estimates reported during the second reporting period when it requests data for the third report, which will be due to CDE on March 15, 2010. Until that time, the data available to the public for education-related jobs in California are not comparable to those reported by other states. Additionally, although CDE's uncorrected job estimates for the second reporting period remain on the Recovery.gov Web site, the Task Force announced that it will not include CDE's job estimates in its reports.

In addition to not following OMB's updated guidance on calculating FTEs, we also found that, partly due to unclear guidance from CDE, the LEAs we reviewed had collected and reported vendor job information inconsistently. We met with seven LEAs—including LA Unified, the largest LEA in California—to gain an understanding of their processes for obtaining the information necessary to meet Recovery Act reporting requirements. LEAs told us that they received reporting guidance from CDE, including guidance on calculating teacher and administrative jobs, but did not receive clear guidance on how to collect and report vendor jobs funded by the Recovery Act. As a result, the LEAs we reviewed had varying jobs data collection processes. For example, one LEA that did not report vendor jobs for the second reporting period told us that, for future quarters, it plans to survey vendors to estimate the range of jobs created or retained (e.g., 1-5, 6-10, 11-15 jobs). Two other LEAs told us they did not contact vendors to collect data on jobs created or retained but reported the number of vendors with a Recovery Act contract. For instance, if the LEA had four contracts using Recovery Act funds during the reporting period, the LEA reported four vendor jobs. Officials from LEAs also reported confusion regarding CDE's guidance to identify vendors that received payments of $25,000 or more in the quarter by reporting their name and zip code or Dun and Bradstreet Universal Numbering System number. Some LEAs did not collect and report job estimates from vendors with payments of less than $25,000 because they erroneously applied CDE's guidance on vendor identification to determine which vendor jobs to report.
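A minimal sketch of the quarterly FTE calculation described above follows; the 520-hour full-time quarterly schedule (13 weeks at 40 hours per week) and the employee hours shown are illustrative assumptions for the example, not figures drawn from this testimony or from OMB's guidance.

```python
# Minimal sketch of the quarterly FTE calculation in the updated OMB guidance:
# Recovery Act-funded hours worked in the quarter divided by the hours in a
# full-time schedule for that quarter. The 520-hour schedule (13 weeks x 40
# hours) and the employee hours below are illustrative assumptions only.

FULL_TIME_HOURS_PER_QUARTER = 520.0  # assumed full-time quarterly schedule


def quarterly_fte(recovery_funded_hours, full_time_hours=FULL_TIME_HOURS_PER_QUARTER):
    """Return the FTE estimate for one reporting quarter."""
    return recovery_funded_hours / full_time_hours


# Example: two employees funded full time and one funded half time by the
# Recovery Act for the entire quarter.
hours_by_employee = [520, 520, 260]
total_fte = sum(quarterly_fte(hours) for hours in hours_by_employee)
print(f"Jobs reported for the quarter: {total_fte:.2f} FTEs")  # 2.50 FTEs
```

In this sketch, an employee paid entirely with Recovery Act funds for the full quarter counts as 1.0 FTE and a half-time employee counts as 0.5 FTE; under the updated guidance, recipients report this quarterly snapshot without determining whether each position was created or retained.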
According to an official from one of the LEAs that misapplied this guidance, the number of vendor jobs it reported for the second quarter would increase from 12 to at least 77 if it collected job estimates from all of its vendors with Recovery Act contracts. As a result, some vendor jobs funded by the Recovery Act were not reported. On February 23, 2010, CDE issued updated guidance to LEAs and other subrecipients to assist them with the third Recovery Act reporting period. However, this guidance neither provided LEAs with additional information on collecting and reporting vendor jobs nor clarified that the vendor identification guidance did not apply to the Recovery Act's jobs reporting requirements. As the prime recipient, CDE is responsible for ensuring Recovery Act requirements are met, including reporting vendor jobs funded by the Recovery Act. We plan to continue to follow these reporting issues as part of our ongoing bimonthly work.

Task Force officials stated that while OMB's revised guidance on calculating FTEs for the second reporting period was easier to implement than the guidance for the first period, other data issues made it difficult to report timely, accurate, and complete information. For example, the Task Force received error messages in FederalReporting.gov when the congressional district where the Recovery Act-funded project was located did not match the recipient address. The Task Force reported receiving more than 1,500 error reports for data it submitted to FederalReporting.gov related to congressional districts and zip codes, even though California's CAAT system had mechanisms in place to try to prevent the entry of incorrect congressional districts. To expedite these corrections, Task Force officials told us that in some instances they decided to change their data to what FederalReporting.gov would accept rather than what they knew to be correct. For example, if they knew a recipient had moved and had a new zip code, but FederalReporting.gov did not have the updated zip code for the recipient's new address, the Task Force used the old zip code to get the report to upload successfully to FederalReporting.gov. Issues with zip codes also surfaced for local agencies that reported directly to FederalReporting.gov. For example, officials from the Los Angeles County Metropolitan Transportation Authority said they received an error message for an incorrect congressional district because they initially used the congressional district in which the project was located rather than that of the agency's headquarters office. Officials from the transportation authority had interpreted OMB's guidance as calling for the congressional district in which the project or activity was being performed, but they later received clarification that the congressional district should be consistent with the recipient's address.

Mr. Chairman and Madam Chairwoman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee or Subcommittee might have. For further information regarding this testimony, please contact Linda Calbom at (206) 287-4809 or calboml@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Guillermo Gonzalez, Chad Gorman, Richard Griswold, Susan Lawless, Gail Luna, Heather MacLeod, Emmy Rhine, Eddie Uyekawa, and Lacy Vong. This is a work of the U.S.
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) specifies several roles for GAO, including conducting bimonthly reviews of selected states' and localities' use of funds made available under the act. This testimony is based on GAO's bimonthly work in California, where the Recovery Act provided more than $85 billion--or about 10 percent of the funds available nationally--for program funding and tax relief. This testimony provides a general overview of: (1) California's use of Recovery Act funds for selected programs, (2) the approaches taken by California agencies to ensure accountability for Recovery Act funds, and (3) the impacts of these funds. This testimony focuses on selected programs that GAO has covered in previous work, including the use of Recovery Act funds by the state and two localities (the City of Los Angeles and the County of Sacramento), Highway Infrastructure Investment, and the Weatherization Assistance Program. GAO also updated information on three education programs with significant Recovery Act funds being disbursed--the State Fiscal Stabilization Fund (SFSF); Recovery Act funds for Title I, Part A, of the Elementary and Secondary Education Act of 1965 (ESEA), as amended; and Part B of the Individuals with Disabilities Education Act (IDEA). GAO provided a draft of this statement to California state and local officials and incorporated their comments where appropriate. (1) State and Local Budgets: Despite the influx of Recovery Act funds, California continues to face severe budgetary pressures and estimates a current shortfall of as much as $21 billion--roughly one-quarter of the state's annual budget expenditures. California's cities and counties are also struggling with budget problems. According to officials from the City of Los Angeles and County of Sacramento, Recovery Act funds are helping to preserve essential services and repair infrastructure but have generally not helped stabilize their base budgets. (2) Transportation Infrastructure: According to California officials, 100 percent of California's $2.570 billion highway infrastructure Recovery Act apportionment has been obligated. The state has dedicated most of these funds for pavement improvements--including resurfacing and rehabilitating roadways. (3) Weatherization Assistance: As of January 25, 2010, California had awarded about $66 million to 35 local service providers throughout the state for weatherization activities. State and federal requirements, such as prevailing wage rates, as well as the implementation of these requirements, have delayed weatherization and, as of February 26, 2010, the state had weatherized only 849 homes--less than 2 percent of the 43,000 homes that are estimated to be weatherized with Recovery Act funds. (4) Education: As of February 19, 2010, California had distributed approximately $4.7 billion for three education programs, including the SFSF. Local education agencies plan to use more than half of these funds to retain jobs; however, a majority reported that they still expect job losses. Also, cash management issues, related to federal cash balances and the calculation and remittance of interest, remain, but the California Department of Education has taken preliminary steps to resolve them. (5) Accountability: California oversight entities and state agencies have taken various actions to oversee Recovery Act funds, including training, risk assessments, on-site monitoring, and audits.
The Governor established the Recovery Task Force to ensure funds are spent efficiently and effectively, and the State Auditor and Inspector General also have key oversight roles. (6) Jobs Reporting: Recipients reported that 70,745 jobs were funded in California during the last quarter of 2009. However, about 70 percent of these jobs were in education and were not reported using the Office of Management and Budget's (OMB) latest guidance, and therefore were not calculated consistently with other jobs reported.
The mission of the Customs Service is to ensure that all goods and persons entering and exiting the United States do so in compliance with all U.S. laws and regulations. It does this by (1) enforcing the laws governing the flow of goods and persons across the borders of the United States and (2) assessing and collecting duties, taxes, and fees on imported merchandise. During fiscal year 1997, Customs collected $22.1 billion in revenue at more than 300 ports of entry and reported that it processed nearly 450 million passengers who entered the United States during the year. To accomplish its mission, Customs is organized into six lines of business—trade compliance, outbound, passenger, finance, human resources, and investigations. Each business area is described below.

Trade compliance includes enforcement of laws and regulations associated with the importation of goods into the United States. To do so, Customs (1) works with the trade community to promote understanding of applicable laws and regulations, (2) selectively examines cargo to ensure that only eligible goods enter the country, (3) reviews documentation associated with cargo entries to ensure that they are properly valued and classified, (4) collects billions of dollars annually in duties, taxes, and fees associated with imported cargo, (5) assesses fines and penalties for noncompliance with trade laws and regulations, (6) seizes and accounts for illegal cargo, and (7) manages the collection of these moneys to ensure that all trade-related debts due to Customs are paid and properly accounted for.

Outbound includes Customs' enforcement of laws and regulations associated with the movement of merchandise and conveyances from the United States. To do so, Customs (1) selectively inspects cargo at U.S. ports to guard against the exportation of illegal goods, such as protected technologies, stolen vehicles, and illegal currency, (2) collects, disseminates, and uses intelligence to identify high-risk cargo and passengers, (3) seizes and accounts for illegal cargo, (4) assesses and collects fines and penalties associated with the exportation of illegal cargo, and (5) physically examines baggage and cargo at airport facilities for explosive and nuclear materials. In addition, the outbound business includes collecting and disseminating trade data within the federal government. Accurate trade data are crucial to establishing reliable trade statistics on which to base trade policy decisions and negotiate trade agreements with other countries. By the year 2000, Customs estimates that exports will be valued at $1.2 trillion, compared to a reported $696 billion in 1994.

Passenger includes processing all passengers and crew of arriving and departing (1) air and sea conveyances and (2) land vehicles and pedestrians. In fiscal year 1997, Customs reported it processed nearly 450 million travelers and, by the year 2000, expects almost 500 million passengers to arrive in the United States annually. Many of Customs' passenger activities focus on illegal immigration and drug smuggling and are coordinated with other federal agencies, such as the Immigration and Naturalization Service and the Department of Agriculture's Animal and Plant Health Inspection Service. Activities include targeting high-risk passengers, which requires timely and accurate information, and physically inspecting selected passengers, baggage, and vehicles to determine compliance with laws and regulations.

Finance includes asset and revenue management activities.
Asset management consists of activities to (1) formulate Customs’ budget, (2) properly allocate and distribute funds, and (3) acquire, manage, and account for personnel, goods, and services. Revenue management encompasses all Customs activities to identify and establish amounts owed Customs, collect these amounts, and accurately report the status of revenue from all sources. Sources of revenue include duties, fees, taxes, other user fees, and forfeited currency and property. The revenue management activities interrelate closely with the revenue collection activities in the trade compliance, outbound, and passenger business areas. Human resources is responsible for filling positions, providing employee benefits and services, training employees, facilitating workforce effectiveness, and processing personnel actions for Customs’ 18,000 employees and managers. Investigations includes activities to detect and eliminate narcotics and money laundering operations. Customs works with other agencies and foreign governments to reduce drug-related activity by interdicting (seizing and destroying) narcotics, investigating organizations involved in drug smuggling, and deterring smuggling efforts through various other methods. Customs also develops and provides information to the trade and carrier communities to assist them in their efforts to prevent smuggling organizations from using cargo containers and commercial conveyances to introduce narcotics into the United States. To carry out its responsibilities, Customs relies on information systems and processes to assist its staff in (1) documenting, inspecting, and accounting for the movement and disposition of imported goods and (2) collecting and accounting for the related revenues. Customs expects its reliance on information systems to increase as a result of its burgeoning workload. For 1995 through 2001, Customs estimates that the annual volume of import trade between the United States and other countries will increase from $761 billion to $1.1 trillion. This will result in Customs processing an estimated increase of 7.5 million commercial entries—from 13.1 million to 20.6 million annually—during the same period. Recent trade agreements, such as the North American Free Trade Agreement (NAFTA), have also increased the number and complexity of trade provisions that Customs must enforce. Customs recognizes that its ability to process the growing volume of imports while improving compliance with trade laws depends heavily on successfully modernizing its trade compliance process and its supporting automated systems. To speed the processing of imports and improve compliance with trade laws, the Congress enacted legislation that eliminated certain legislatively mandated paper requirements and required Customs to establish the National Customs Automation Program (NCAP). The legislation also specified certain functions that NCAP must provide, including giving members of the trade community the capability to electronically file import entries at remote locations and enabling Customs to electronically process “drawback” claims. In response to the legislation, Customs began in 1994 to modernize the information systems that support operations. Customs has several projects underway to develop and acquire new software and evolve (i.e., maintain) existing software to support its six business areas. Customs’ fiscal year 1998 budget for information management and technology activities was about $147 million. 
Customs' major information technology effort is its Automated Commercial Environment (ACE) system. In 1994, Customs began to develop ACE to replace its existing automated import system, the Automated Commercial System (ACS). ACE is intended to provide an integrated, automated information system for collecting, disseminating, and analyzing import-related data and ensuring the proper collection and allocation of revenues, totaling about $19 billion annually. According to Customs, ACE is planned to automate critical functions that the Congress specified when it established NCAP. Customs reported that it spent $47.8 million on ACE as of the end of fiscal year 1997. In November 1997, Customs estimated it would cost $1.05 billion to develop, operate, and maintain ACE over the 15 years from fiscal year 1994 through fiscal year 2008. Customs plans to deploy ACE to more than 300 ports that handle commercial cargo imports. Customs plans to develop and deploy ACE in multiple phases. According to Customs, the first phase, known as NCAP, is an ACE prototype. Customs currently plans to deploy NCAP in four releases. The first release was deployed for field evaluation at three locations in May 1998, and the fourth is scheduled for 1999. Customs, however, has not adhered to previous NCAP deployment schedules. Specifically, implementation of the NCAP prototype slipped from January 1997 to August 1997 and then again to a series of four releases beginning in October 1997, with the fourth release starting in June 1998. Customs also has several other projects underway to modify or enhance existing systems that support its six business areas. For example, in fiscal year 1998, Customs planned to spend about $3.7 million to enhance its Automated Export System (AES), which supports the outbound business area and is designed to improve Customs' collection and reporting of export statistics and to enforce export regulations. In addition, Customs planned to spend another $4.6 million to maintain its administrative systems supporting its finance and human resource business areas.

The Chairman, Subcommittee on Treasury and General Government, Senate Committee on Appropriations, and the Chairman, Subcommittee on Treasury, Postal Service and General Government, House Committee on Appropriations, requested that we review Customs' ability to develop software for its computer systems. Our objectives were to determine (1) the maturity of Customs' software development processes and (2) the effectiveness of Customs' software process improvement program.

To determine Customs' software development process maturity, we applied the Software Engineering Institute's (SEI) Software Capability Maturity Model (SW-CMM) and its Software Capability Evaluation (SCE) method. SEI's expertise in software process maturity, as well as its capability maturity models and evaluation methods, is widely accepted throughout the software industry. All our specialists were SEI-trained. The SW-CMM ranks organizational maturity according to five levels. (See figure 1.1.) Maturity levels 2 through 5 require the verifiable existence and use of certain software development processes, known as key process areas (KPA). According to SEI, an organization that has these processes in place is in a much better position to successfully develop software than an organization that does not. We evaluated Customs' software development processes against five of the six level 2 KPAs.
At the repeatable level (level 2), basic project management processes are established to track cost, schedule, and functionality, and the necessary process discipline is in place to repeat earlier successes on projects with similar applications. At the initial level (level 1), the software process is characterized as ad hoc, and occasionally even chaotic; few processes are defined, and success depends on individual effort. The sixth level 2 KPA, software subcontract management, was not evaluated because Customs did not use subcontractors on any of the projects that we evaluated. (See table 1.1.)

As established by the model, each KPA contains five common attributes that indicate whether the implementation and institutionalization of a KPA can be effective, repeatable, and lasting. The five common attributes are:

Commitment to perform: The actions that the organization must take to establish the process and ensure that it can endure. Commitment to perform typically involves establishing organizational policies and senior management sponsorship.

Ability to perform: The preconditions that must exist in the project or organization to implement the software development process competently. Ability to perform typically involves resources, organizational structures, and training.

Activities performed: The roles and procedures necessary to implement a KPA. Activities performed typically involve establishing plans and procedures, performing the work, tracking it, and taking appropriate management actions.

Measurement and analysis: Activities performed to measure the process and analyze the measurements. Measurement and analysis typically includes defining the measurements to be taken and the analyses to be conducted to determine the status and effectiveness of the activities performed.

Verifying implementation: The steps to ensure that the activities are performed in compliance with the process that has been established. Verification typically encompasses reviews by management.

In accordance with SEI's SCE method, for five of the six level 2 KPAs we evaluated Customs' institutional policies and practices and compared project-specific guidance and practices against the five common attributes. This project-specific comparison can result in one of four possible outcomes: (1) project strength—an effective implementation of the key practice, (2) project weakness—ineffective implementation of a key practice or failure to implement a key practice, (3) project observation—key practice evaluated but evidence inconclusive and cannot be characterized as either strength or weakness, and (4) not rated—key practice not currently relevant to the project and therefore not evaluated.

We performed the project-specific evaluations on three ongoing Customs software development projects, each of which is described below. As requested by the Subcommittee Chairmen, one of the projects evaluated was ACE, which is the largest and most important system that Customs is developing. The other two projects were selected by Customs on the basis of the following GAO-specified criteria: (1) each project should be managed by a different software team, (2) at least one project should involve a legacy system, (3) at least one project should involve Year 2000 software conversion, and (4) each project should be relatively large and important to accomplishing Customs' mission. The projects we evaluated are:

National Customs Automation Program (NCAP 0.1): NCAP 0.1 was the first component of the National Customs Automation Program Prototype (NCAP/P).
NCAP/P, in turn, is the first phase of the Automated Commercial Environment (ACE). Customs began developing ACE in 1994 to address the new import processing requirements established by the National Customs Automation Program. ACE is also intended to replace the agency's legacy automated import system, the Automated Commercial System (ACS). NCAP 0.1 was installed at three field locations in May 1998.

Automated Export System (AES): AES is an export information gathering and processing system, developed through cooperative efforts by Customs, the Bureau of Census, other federal agencies with export missions, and the export trade community. AES is designed to improve the collection of trade statistics; assist in the creation of a paperless export environment; facilitate the release of exports subject to licensing requirements; and consolidate export data required by several government agencies, easing the data filing burden for exporters while streamlining the federal data collection process. Customs installed AES in all U.S. vessel ports in October 1996, and currently it is operational in all ports, including air, rail, and truck transit ports. Customs and Census officials estimate that they spent approximately $12.9 million to develop and implement AES from fiscal year 1992 to 1997. These costs included, among other things, expenses for contractors, travel, and training. According to Customs' and Census' figures, both agencies estimate that together they will spend an additional $32.2 million through fiscal year 2002 on AES implementation and maintenance.

Administrative Security System: The Administrative Security System assists users in requesting access to administrative systems. Users' requests are electronically submitted to the appropriate official for approval. In addition, other portions of the Administrative Security System allow system administrators to prepare and maintain user profiles, request logs, and electronic approval and disapproval reports.

To assess the effectiveness of Customs' software process improvement program, we interviewed the Director, Technical Architecture Group, Office of Information and Technology, to determine: (1) process improvements that are planned and underway, (2) the rationale for each initiative, (3) the relative priority of each, (4) progress made on each initiative, and (5) obstacles, if any, impeding progress. We also reviewed past process improvement plans, meeting minutes, and related documentation. Further, we reviewed SEI's model for software process improvement, known as IDEAL, which defines five sequential phases of software process improvement that can be used to develop a long-range, integrated plan for initiating and managing a software process improvement program.

Customs provided written comments on a draft of this report. These comments are presented and evaluated in chapter 8, and are reprinted in appendix I. We performed our work at Customs' Newington, Virginia, Data Center from February 1998 through November 1998, in accordance with generally accepted government auditing standards.

The purpose of requirements management is to establish agreement between the customer and the software developers on the customer's requirements that the software developers will implement. This agreement typically is referred to as the "system requirements allocated to the software." The agreement covers both technical and nontechnical (e.g., delivery dates) requirements.
The agreement forms the basis for estimating, planning, performing, and tracking the software developer’s activities throughout the software life cycle. According to the SW-CMM, a repeatable requirements management process, among other things, includes (1) documenting the system requirements allocated to software, (2) providing adequate resources and funding for managing the allocated requirements, (3) following a written organizational policy for requirements management, (4) having a quality assurance group that reviews the activities and work products for managing allocated requirements and reports the results, (5) using the allocated requirements as the basis for software plans, work products, and activities, and (6) training members of the software engineering group to perform their requirements management activities. All three projects had practice strengths in this KPA. For example, each project documented the system requirements allocated to software and ensured that adequate resources and funding for managing the allocated requirements were provided. One of the projects, NCAP 0.1, had strengths in all but two practices under this KPA; however, each practice weakness is significant. Collectively, the projects had many weaknesses in this KPA, and thus Customs’ requirements management processes do not meet “repeatable” maturity level criteria. For example, none of the projects had a written organizational policy governing requirements management, and none had a quality assurance group for reviewing and reporting on the activities and work products associated with managing the allocated requirements. In the absence of these two practices, management is missing two means for ensuring that software requirements are managed in a prescribed manner. Also, two of the projects did not use the allocated software requirements as the basis for software plans, work products, and activities, which increases the risk that the software developed will not fully satisfy requirements. Further, members of two projects’ software engineering groups were not trained to perform requirements management activities, thus increasing the chances of mismanagement. Table 2.1 provides a comprehensive list of the three projects’ strengths and weaknesses for the requirements management KPA. The specific findings supporting the practice ratings cited in table 2.1 are in tables 2.2 through 2.4. While Customs’ projects had several practice strengths in this KPA, the number and significance of their practice weaknesses mean that Customs’ ability to manage software requirements is not repeatable. As a result, Customs is at risk of producing systems that fail to provide promised capabilities, and cost more and take longer than necessary. The purpose of software project planning is to establish reasonable plans for performing the software engineering and for managing the software project. 
According to the SW-CMM, a repeatable software project planning process, among other things, includes (1) documenting the software project plan, and preparing plans for software engineering facilities and support tools, (2) identifying the work products needed to establish and maintain control of the software project, (3) following a written organizational policy for planning a software project, (4) having a quality assurance group that reviews the activities and work products for software project planning and reports the results, (5) estimating the software project’s efforts and costs, and estimating its critical computer resources according to a documented procedure, (6) making and using measurements to determine the status of planning activities, and (7) training personnel in software project planning and estimating. All of the projects that we evaluated had key practice strengths in this KPA. For example, all had strengths in (1) documenting a software project plan and preparing plans for the software engineering facilities and support tools needed to develop the software and (2) identifying the work products needed to control the software project. NCAP 0.1, in particular, had many additional practice strengths. However, many significant practice weaknesses were found in all three projects. None of the projects followed an organizational software project planning policy, and none had a quality assurance group conducting reviews and/or audits. As a result, the projects performed these practices differently and inconsistently, and controls were unreliable. For example, while the NCAP 0.1 project followed a documented procedure for estimating the size of software work products (or changes to the size of work products), and made and used measurements to determine the status of software planning activities, neither of the other two projects performed these practices and none of the projects had personnel trained in software project planning and estimating. Such project planning weaknesses mean that management has no assurance that it will get the consistent, complete, and reliable information about the projects’ expected costs and schedules needed to make expeditious and informed investment decisions. Table 3.1 provides a comprehensive list of the three projects’ strengths, weaknesses, and observations for the software project planning KPA. The specific findings supporting the practice ratings cited in table 3.1 are in tables 3.2 through 3.4. Effective planning is the cornerstone of successful software development project management. While Customs showed some strengths in this KPA, its many weaknesses render its software project planning processes unrepeatable. Therefore, Customs has no assurance that the projects are effectively establishing plans, including reliable projections of costs and schedules, and effectively measuring and monitoring progress and taking needed corrective actions expeditiously. The purpose of software project tracking and oversight is to provide adequate visibility into the progress of the software development so that management can act effectively when the software project’s performance deviates significantly from the software plans. Software project tracking and oversight involves tracking and reviewing the software accomplishments and results against documented estimates, commitments, and plans, and adjusting these plans based on the actual accomplishments and results. 
According to the SW-CMM, effective software project tracking and oversight, among other things, includes (1) designating a project software manager to be responsible for the project’s software activities and results, (2) having a documented software development plan for tracking software activities and communicating status, (3) following a written organizational policy for managing the project, (4) conducting periodic internal reviews to track technical progress, plans, performance, and issues against the software development plan, (5) tracking the software risks associated with the cost, resource, schedule, and technical aspects of the project, (6) explicitly assigning responsibility for software work products and activities, (7) tracking the sizes of the software work products (or sizes of the changes to the software work products) and taking corrective actions as necessary, and (8) periodically reviewing the activities for software project tracking and oversight with senior management. The projects evaluated exhibited some software project tracking and oversight practice strengths. For example, all three of the projects had a project software manager designated to be responsible for the project’s software activities and results, and all had a documented software development plan for tracking software activities and communicating status. Also, NCAP 0.1 had strengths in all but five of this KPA’s 24 key practices. However, the three projects collectively had many weaknesses, and these weaknesses, including the five for NCAP 0.1, were significant and thus preclude Customs from meeting SEI’s repeatable maturity level criteria. For example, none of the projects followed a written organizational policy for managing the software project. With no established policy, Customs increases the risk that key tracking and oversight activities will not be performed effectively. For example, for two of the three projects, the project managers did not (1) conduct periodic internal reviews to track technical progress, plans, performance, and issues against the software development plan, (2) track software risks associated with cost, resource, schedule, and technical aspects of the project, (3) explicitly assign responsibility to individuals for software work products and activities, (4) track the sizes of the software work products (or sizes of the changes to the software work products) and take corrective actions, or (5) periodically review software project tracking and oversight activities with senior management. Table 4.1 provides a comprehensive list of the three projects’ strengths, weaknesses, and observations for the software project tracking and oversight KPA. The specific findings supporting the practice ratings cited in table 4.1 are in tables 4.2 through 4.4. Despite several practice strengths in this KPA, the number and significance of the practice weaknesses that we found mean that Customs’ current process for tracking and overseeing its projects is not repeatable, thereby increasing the chances of its software projects being late, costing more than expected, and not performing as intended. The purpose of software quality assurance is to independently review and audit the software products and activities to verify that they comply with the applicable procedures and standards and to provide the software project and higher-level managers with the results of these independent reviews and audits. 
According to the SW-CMM, a repeatable software quality assurance process, among other things, includes (1) preparing a software quality assurance plan for the project according to a documented procedure, (2) having a written organizational policy for implementing software quality assurance, (3) conducting audits of designated work processes and products to verify compliance, (4) documenting deviations identified in the software activities and software work products and handling them according to a documented procedure, and (5) having experts independent of the software quality assurance group periodically review the activities and work products of the project’s software quality assurance group. All of the projects evaluated had extensive and significant software quality assurance practice weaknesses. For example, two of the projects did not have a software quality assurance plan; and none of the projects (1) had a written organizational policy for implementing software quality assurance, (2) conducted audits of designated work products to verify compliance, (3) documented deviations identified in the software activities and software work products and handled them according to a documented procedure, or (4) had experts independent of the software quality assurance group periodically review the group’s work products. In fact, only one of the projects, AES, had any software quality assurance practice strengths, and these strengths were limited to only a few practices. In this case, the project had assigned responsibility for software quality assurance to a single individual and, for example, a software quality assurance plan had been drafted, although not according to a documented procedure. This virtual absence of software quality assurance on Customs’ software projects increases greatly the risk of software process and product standards not being met, which in turn increases the risk of software not performing as intended, and costing more and taking longer to develop than necessary. Table 5.1 provides a comprehensive list of the three projects’ strengths, weaknesses, and observations for the software quality assurance KPA. The specific findings supporting the practice ratings cited in table 5.1 are in tables 5.2 through 5.4. Customs’ software quality assurance process has many weaknesses and is, therefore, undefined and undisciplined. As a result, Customs cannot provide management with independent information about adherence to software process and product standards. To develop and maintain software effectively, Customs must adopt a structured and rigorous approach to software quality assurance. The purpose of software configuration management is to establish and maintain the integrity of the products of the software project throughout the project’s software life-cycle. Software configuration management involves establishing product baselines and systematically controlling changes to them. 
According to the SW-CMM, a repeatable software configuration management process, among other things, includes (1) preparing a software configuration management plan according to a documented procedure, (2) establishing a configuration management library system as a repository for the software baselines, (3) identifying software work products to be placed under configuration management, (4) controlling the release of products from the software baseline library according to a documented procedure, (5) following a written organizational policy for implementing software configuration management, (6) recording the status of configuration items/units according to a documented procedure, (7) making and using measurements to determine the status of the software configuration management activities, and (8) reviewing software configuration management activities with senior management on a periodic basis. Customs' processes for software configuration management show strengths in several activities. For example, all three projects had developed software configuration management plans according to a documented procedure. Also, two of the projects (NCAP 0.1 and AES) established configuration management library systems as repositories for the software baselines, identified software work products to be placed under configuration management, and controlled the release of products from the software baseline library according to a documented procedure. However, the projects had many practice weaknesses that collectively jeopardize Customs' ability to maintain the integrity of the projects' software products. For example, none of the projects had a written organizational policy for implementing software configuration management, and none had documented procedures for recording the status of configuration items (e.g., code, documents). Moreover, none of the projects made or used measurements to determine the status of the software configuration management activities, or reviewed software configuration management activities with senior management on a periodic basis. Table 6.1 provides a comprehensive list of the three projects' strengths and weaknesses for the software configuration management KPA. The specific findings supporting the practice ratings cited in table 6.1 are in tables 6.2 through 6.4. Customs has many configuration management process weaknesses, and thus its capability to establish and maintain the integrity of the wide range of software products is nonrepeatable and ineffective. Without a mature configuration management process, Customs can lose control of the current software product baseline, potentially producing and using inconsistent product versions and creating operational problems.

To consistently develop software with specified functionality on time and within budget, Customs must improve its software development processes. According to SEI, an effective process improvement program includes (1) establishing a process improvement management structure, (2) developing a process improvement plan, (3) determining the organization's baseline capability and using this as a basis for targeting process initiatives, and (4) dedicating adequate resources for implementing the plan. Although Customs has attempted in the past to initiate and sustain process improvement activities, those activities were terminated without having improved Customs' processes. Currently, Customs has no software process improvement program. In 1996, SEI published a software process improvement model, called IDEAL.
This model has five phases: Initiating, Diagnosing, Establishing, Acting, and Leveraging—IDEAL. Each of the phases is summarized below.

Initiating phase: During this phase, an organization establishes the management structure of the process improvement program, defines and assigns roles and responsibilities, allocates initial resources, develops a plan to guide the organization through the first three phases of the program, and obtains management approval and funding for the program. Two key organizational components of the program management structure established during this phase are a management steering group and a software engineering process group (SEPG). Responsibility for this phase rests with senior management.

Diagnosing phase: During this phase, the SEPG appraises the current level of software process maturity to establish a baseline of the organization's process capability, and identifies any ongoing process improvement initiatives. The SEPG then uses the baseline to identify weaknesses and target process improvement activities. It also compares these targeted activities with any ongoing process improvement activities and reconciles any differences. Responsibility for this phase rests primarily with line managers and practitioners.

Establishing phase: During this phase, the SEPG prioritizes the software process improvement activities and develops strategies for pursuing them. It then develops a process improvement action plan that details the activities and strategies and includes measurable goals for the activities and metrics for monitoring progress against the goals (a notional sketch of such a plan appears below). Also during this phase, the resources needed to implement the plan are committed and training is provided for the SEPG's technical working groups, who will be responsible for developing and testing new or improved processes. Responsibility for this phase resides primarily with line managers and practitioners.

Acting phase: In this phase, the work groups create and evaluate new and improved processes. Evaluation of the processes is based on pilot tests that are formally planned and executed. If the pilots are successful, the work groups develop plans for organization-wide adoption and institutionalization and, once approved, execute them. Responsibility for this phase resides primarily with line managers and practitioners.

Leveraging phase: During this phase, results and lessons learned from earlier phases are assessed and applied, as appropriate, to enhance the process improvement program's structure and plans. Responsibility for this phase rests primarily with senior management.

In 1996, Customs initiated some limited software process improvement activities. Specifically, it hired a contractor to develop a process improvement plan, which was completed in September 1996. According to the plan, Customs was to reach CMM level 2 process maturity (the repeatable level) by 1998 and CMM level 3 (the defined level) by 2002. Customs began limited implementation of the plan in May 1997, when it established process improvement teams for two KPAs—software project planning and project tracking and oversight. Generally, the teams were tasked with defining, implementing, and maintaining CMM-based processes for their respective KPAs. Customs did not staff or fund any other KPA improvement activities at this time. In August 1997, Customs discontinued all process improvement activities. Customs officials stated that this decision was based on the need to focus staff and resources on the agency's Year 2000 conversion program.
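The sketch below gives one notional way to record the establishing-phase elements described above (prioritized activities, measurable goals, and progress metrics). It is an illustration only: the field names and example entries are hypothetical and are not drawn from SEI's IDEAL materials or from Customs' 1996 plan.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ImprovementActivity:
        """One prioritized activity in an IDEAL-style establishing-phase action plan."""
        key_process_area: str           # e.g., "Software project planning"
        priority: int                   # 1 = highest priority
        measurable_goal: str            # what the activity is expected to achieve
        progress_metrics: List[str] = field(default_factory=list)

    # Hypothetical example entries, not taken from Customs' actual plan
    action_plan = [
        ImprovementActivity(
            key_process_area="Software project planning",
            priority=1,
            measurable_goal="Documented cost and schedule estimates for all new projects",
            progress_metrics=["percentage of projects with approved plans"],
        ),
        ImprovementActivity(
            key_process_area="Software quality assurance",
            priority=2,
            measurable_goal="Independent quality assurance reviews on every major release",
            progress_metrics=["audits completed per quarter"],
        ),
    ]

    # Review activities in priority order, as a steering group might
    for activity in sorted(action_plan, key=lambda a: a.priority):
        print(f"{activity.priority}. {activity.key_process_area}: {activity.measurable_goal}")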
Currently, Customs does not have a software development process improvement program, and it has not taken steps to initiate one. Although it has assigned two people part-time to process improvement, it has not assigned organizational responsibility and authority, established a program management structure, developed a plan of action, or committed the resources (trained staff and funding) needed to execute such a plan. Customs does not have an effective software development process improvement program. As a result, it cannot expect to improve its immature software development processes.

Customs develops and maintains software for systems that are critical to its ability to fulfill its mission. However, its software development processes are ad hoc and sometimes chaotic, and are not repeatable even on a project-by-project basis. As a result, Customs' success or failure in developing software depends largely on specific individuals, rather than on well-defined and disciplined software management practices. This greatly reduces the probability that its software projects, whether new developments or maintenance of existing software, will consistently perform as intended and be delivered on schedule and within budget. For Customs' software projects to mature beyond this initial level, the agency must implement basic management controls and instill self-discipline in its software projects. Customs acknowledges the importance of software process maturity and the need to improve its software development processes. However, it does not have a program for improving its software development processes and has not begun to establish one. Until it does, Customs has no assurance that its large investment in software development and maintenance will produce systems that perform needed functions, on time, and within budget.

We recommend that, after ensuring that its mission-critical systems are Year 2000 compliant but before investing in major software development efforts like ACE, the Commissioner of Customs direct the Chief Information Officer to assign responsibility and authority for software development process improvement; develop and implement a formal plan for software development process improvement that is based on the software capability evaluation results contained in this report, specifies measurable goals and time frames, prioritizes initiatives, estimates resource requirements (trained staff and funding), and defines a process improvement management structure; ensure that every new software development effort in Customs adopts processes that satisfy at least SW-CMM level 2 requirements; and ensure that process improvement activities are initiated for all ongoing essential software maintenance projects.

In its written comments on a draft of this report, Customs acknowledged the importance of software process improvement and maturity. Also, it agreed with GAO's overall findings, including that Customs' software development processes have not attained SW-CMM level 2 maturity. To address these weaknesses, Customs stated that it has taken the first step toward implementing our recommendations by assigning responsibility and authority for software process improvement as part of a reorganization of its Office of Information and Technology, which Customs stated will be implemented in early 1999.
Customs further stated that once the reorganization is implemented, a formal software process improvement program will be established, and that this program will include definition of an action plan, commitment of resources, and specification of goals for achieving CMM levels 2 and 3. According to Customs, these improvement activities are in their early stages. When they are successfully implemented, they should address many of our recommendations. Customs also stated that because its legacy systems are aging and need to be enhanced and replaced, software process improvement must occur in parallel with continued software development investments. History has shown that attempting to modernize without first instituting disciplined software processes has been a characteristic of failed modernization programs. Until it implements disciplined software processes (i.e., at least level 2 process maturity), Customs cannot prudently manage major system investments, such as ACE with an estimated life cycle cost exceeding $1 billion. Customs’ comments also included a request to meet with us to discuss system-specific KPA practice strength and weakness determinations. We met prior to requesting comments on a draft of this report and then again on January 12, 1999, to discuss SEI’s SW-CMM requirements and the basis for our determinations. We are prepared to continue assisting Customs as it improves its software processes. Appendix I provides the full text of Customs’ comments and our responses to additional Customs comments not discussed above.
Pursuant to a congressional request, GAO reviewed the Customs Service's software development maturity and improvement activities, focusing on: (1) the maturity of Customs' software development processes; and (2) whether Customs has an effective software process improvement program. GAO noted that: (1) because of the number and severity of Customs' software development process weaknesses, Customs did not fully satisfy any of the key process areas (KPA) necessary to achieve the repeatable level of process maturity; (2) as a result, its processes for developing software, a complex and expensive component of Customs' systems, are ad hoc, sometimes chaotic, and not repeatable across projects; (3) Customs had some practice strengths in all but one of the five KPAs evaluated (i.e., requirements management, software project planning, software project tracking and oversight, software quality assurance, and software configuration management); however, GAO also found extensive and significant weaknesses in each of these KPAs; (4) some of these weaknesses were systemic, recurring in each of the KPAs; (5) for example, Customs had no written policy for managing or implementing any of the KPAs; (6) none of the projects had: (a) an approved quality assurance plan; (b) documented procedures for determining the project cost, schedule, or effort; or (c) any outside group reviewing or reporting on the project's compliance with defined processes; (7) these weaknesses are some of the reasons for Customs' limited success, for example, in delivering promised Automated Commercial Environment (ACE) capabilities on time; (8) Customs does not have a software development process improvement program, and it has not taken the basic steps to initiate one; (9) these steps, many of which are described in Software Engineering Institute's initiating, diagnosing, establishing, acting, and leveraging model for process improvement, include assigning responsibility and authority for process improvement, establishing a process improvement management structure, defining a plan of action, and committing needed resources; and (10) until Customs establishes an effective process improvement program, its software processes will remain poorly defined and undisciplined, and its software projects are likely to suffer cost, schedule, and performance shortfalls.